Author: Clare Chambers

Is your digital marketing apprentice on the right ‘track’?

A survey conducted by MarketingProfs found that 7 in 10 executives at large companies and agencies said their digital marketing teams are strong in some areas but mediocre or weak in others; 21% said their employees are mediocre or weak across all areas.

The most cost-effective solution for businesses is to upskill their workforce through a government-funded apprenticeship programme, which is designed and developed by selected groups of industry experts and organisations, also known in further education as ‘trailblazer groups’.

Like all apprenticeship standards, digital marketer level 3 is practical and skills-based: it is designed to teach apprentices the latest digital marketing techniques and campaign-building strategies, which are distinct from those taught on its ‘sister’ standards: marketing assistant level 3, marketing executive level 4 and marketing manager level 6.

While it should go without saying that skills providers ought to deliver role-relevant, industry-aligned content that apprentices can apply immediately in the workplace, this is not always the case.

Many providers simply know that it’s a profitable programme and don’t sufficiently understand digital marketing as a subject and industry. Hence, in terms of learning content, they assume that one size fits all roles, which is often far from reality. For example, trainers have often complained to me that the Google Analytics vendor qualification, albeit free, is too easy or unrelated to the role of a particular apprentice, and that it is often at the discretion of the end-point assessment organisation (EPAO) as to whether course equivalents are permitted.

Digital marketing career paths

The digital marketing industry is full of diverse opportunities for ambitious, creative and technically minded professionals.

According to multiple sources, demand for skills in digital marketing and analytics has been rising sharply since the onset of the COVID-19 pandemic, with 8 of the top 10 most sought-after skills relating to more strategic and account-based marketing (ABM) expertise.

Where will these opportunities take aspiring digital marketers? What does a typical digital marketing career path look like?

The answer is more complex than you would expect. While a “digital marketer” can be broadly defined as a professional who works to promote brands and products through digital channels, the arc of a digital marketer’s career often depends on where they choose to specialise.

According to a report by The Economist, key skills that digital marketing professionals should focus on include engagement and technology, strategy and analytics. But, before selecting a particular pathway, a skills provider should be familiar with the different career routes in digital marketing, and how they fit with the learner’s and employer’s current situation and growth plans. Trends such as the “no-code movement” should also be taken into account.

Here are some questions to consider (and ask apprentices):

  • Do you want to freelance or join an agency?
  • Would you class your skills as more technical or creative?
  • Would you prefer to be front and centre interacting with customers, or in the back office?
  • How well do you work with numbers?
  • Do you have any business experience yet (freelancing and “side-hustling” count nowadays – particularly in creative & digital media)?
  • How ‘deep’ are your technical skills? Or do the words ‘coding’ and ‘analytics’ turn you off?
  • Have you heard of the term ‘analytics’ and what can you tell me about it?

Many skills do overlap, however, so a solid foundation in multiple areas of digital marketing can increase an apprentice’s desirability as a job candidate: a digital marketer typically works in a team but takes responsibility for certain aspects of a campaign. And, usually, the bigger the marketing department, the more siloed the job description tends to become.

Of course, if the apprentice is planning to go into a leadership role, then they’ll need to become more “T-shaped” later on.

Getting on the right track

Fortunately, there are different specialties or “tracks” you can offer as a skills provider on the digital marketer level 3 standard: Standard, Social Media and SEO. Each pathway has its own focus – while also covering the other areas more broadly – and will help an apprentice to develop the knowledge, skills and behaviours required to define, design, build and implement campaigns across a variety of online, digital and social media platforms.

At level 3, the digital marketer standard is most suitable for those at the beginning of their career, be that new to work entirely or looking to upskill or reskill into one of the fastest growing industries in the world.

On completion of the programme, the apprentice will also have achieved accredited qualifications and will be able to apply for a place on the independent Register of IT Technicians and join the Chartered Institute of Marketing (CIM) as an Affiliate (Professional) member.

In my experience, programmes which offer masterclasses and workshops built in partnership with digital marketing industry experts tend to be of the best quality and the most engaging for learners. +24 Academy provides a great example of this approach to learning content design.

The three possible pathways

  • Digital Marketer L3 (Standard Pathway)

The standard pathway strikes a balance between the social media and SEO pathways, providing an insight into digital marketing as a whole with a blended focus.

An apprentice will usually be part of a marketing team and report to a marketing or IT manager, working to briefs and instructions focused on driving customer acquisition, as well as engaging with and retaining existing customers. Over time, they will take on increased responsibility and may begin to lead on certain aspects of plans or campaigns during the programme.

Common job titles later on may be Digital Marketing Assistant, Digital Marketing Executive, Digital Marketing Coordinator, Campaign Executive, Social Media Executive, Content Coordinator, Email Marketing Assistant, SEO Executive, Analytics Executive, Digital Marketing Technologist, or similar depending on the company.

Explore this excellent guide on defining digital marketing job titles.

Example programme layout: 

Months 1-4: Programme introduction & expectations; Marketing principles masterclasses; Marketing principles revision & examination; BCS Level 3 in Marketing Principles
Months 5-7: Principles of coding masterclasses; Principles of coding revision & examination; BCS Level 3 in Principles of Coding; Google Analytics IQ Certification
Months 7-15: Plan and execute three digital marketing campaigns; Evaluation of campaigns & portfolio evidence; Preparation for End Point Assessment (EPA)
  • Digital Marketer L3 (Social Media Pathway)

The social media pathway uses the standard as its base, ensuring all digital marketing topics are covered, alongside specific social media workshops that will give you an insight into social platforms, analytics, listening tools and visual sharing. This gives you the ultimate digital marketing programme, with a social media edge, and will help you develop the knowledge, skills and behaviours required to define, design, build and implement digital campaigns across a variety of online and social media platforms.

Common job titles in social media may be “Social Media Marketing Assistant”, “Social Media Strategist”, “Social Media Executive”, “Community Coordinator”, “Campaign Executive”, “Social Media Content Coordinator”, “Community Manager”, or similar.

Explore this detailed guide to job titles within social media marketing.

Example programme layout: 

Months 1-4: Programme induction/bootcamp; Marketing principles masterclasses; Marketing principles revision & examination; BCS Level 3 in Marketing Principles
Months 5-7: Principles of coding masterclasses; Principles of coding revision & examination; CIW Social Media Strategist Certification & Examination; Google Analytics IQ Certification; Facebook, HubSpot & Twitter Digital Marketing Certified
Months 7-15: Plan & execute three social media marketing campaigns; Evaluation of campaigns & portfolio evidence; Preparation for your End Point Assessment (EPA)
  • Digital Marketer L3 (SEO pathway)

The search engine optimisation (SEO) pathway uses the standard as its base, ensuring all digital marketing topics are covered, alongside specific SEO workshops that will give you an insight into search engines, search analytics and link building. This gives you the ultimate digital marketing programme, with an SEO edge. (Psst – it’s no secret in the digital industry that SEO professionals who can code often earn a lot more than those who cannot!)

Common job titles for apprentices at the end of this pathway may include terms such as “Senior SEO manager”, “Head of SEO”, “SEO content writer”, “SEO account manager”, “Marketing manager SEO” and “SEO digital marketing”.

Example programme layout: 

Months 1-4: Programme induction/bootcamp; Marketing principles masterclasses; Marketing principles revision and examination; BCS Level 3 in Marketing Principles
Months 5-7: Principles of coding masterclasses; Principles of coding revision & examination; BCS Level 3 in Principles of Coding; CIW Site Development Associate; Google Analytics IQ Certification (beginner, advanced & power users)
Months 7-15: Plan & execute three SEO campaigns; Evaluation of campaigns & portfolio evidence; Preparation for End Point Assessment (EPA)

The examples are based on BCS (British Computer Society) syllabi but would look similar for those of other awarding bodies and end-point assessment organisations that deliver digital marketer level 3.

Explore this detailed guide to SEO career paths.

Employers may also find this SEO job market report useful.

From concept to launch, if you need advice on module content or help with designing a bespoke assessment plan that reflects your learners’ needs as well as their long-term career goals – or if you’re simply unsure which track could benefit your learner(s) and organisation – reach out here.

Validity: the most critical indicator of test & evidence quality.

“Can a questionnaire be reliable but not valid?”

In my work as an internal quality assurer (IQA), I’ve often encountered questions like the one above regarding the ‘validity’ of evidence for a learner’s portfolio, so I felt this is a topic worth covering in more detail, particularly for less experienced on-programme assessors.

Validity and reliability, along with fairness, are considered core principles of high-quality assessment. Though validity and reliability are often spoken about as a pair, it is important to note that an assessment can be reliable (i.e., have replicable results) without necessarily being valid (i.e., accurately measuring the skills it is intended to measure), but an assessment cannot be valid unless it is also reliable. A questionnaire that gives a learner much the same score on every sitting is reliable, for example, but if its questions actually measure general literacy rather than the intended subject knowledge, it is not valid.

Validity is arguably the most important criterion of test quality. The term validity refers to how well a test measures what it is supposed to measure. Valid assessments produce data that can be used to inform educational decisions at multiple levels, from improving training provision and effectiveness, to evaluating assessors’ impact, to individual learner gains and performance.

However, validity is not a property of the test itself; rather, it is the degree to which certain conclusions drawn from the test results can be considered “appropriate and meaningful.” The validation process includes the assembling of evidence to support the use and interpretation of test scores based on the concepts which the test is designed to measure, known as constructs.

If a test does not measure all the skills within a construct, the conclusions drawn from the test results may not reflect the learner’s knowledge accurately, and thus, threaten its overall validity. 

To be considered valid, “an assessment should be a good representation of the knowledge and skills it intends to measure,” and to maintain that validity for a wide range of learners, it should also be both “accurate in evaluating students’ abilities” and reliable “across testing contexts and scorers.” (Source)

On a test with high validity the items will be closely linked to the test’s intended focus. For many certification and professional licensure tests this means that the items will be highly related to a specific job or occupation. If a test has poor validity then it does not measure the job-related content and competencies it ought to. When this is the case, there is no justification for using the test results for their intended purpose. 

Factors Impacting Validity

Before looking at how validity is measured and differentiating between the different types of validity, it is important to understand how external and internal factors can affect it.

A learner’s literacy level can have an impact on the validity of an assessment. For example, if a learner struggles to understand what a question is asking, a test will obviously not be an accurate assessment of what the learner truly knows about a subject. Educators and assessors should, therefore, confirm that an assessment is at the correct reading level for the learner.

Learner self-efficacy can also impact the validity of an assessment. If learners have low self-efficacy – that is, low belief in their abilities in the particular area being tested – they will typically perform worse. Their own doubts hinder their ability to accurately demonstrate knowledge and comprehension.

A learner’s anxiety levels are also a factor to be aware of. Learners with high ‘test anxiety’ will underperform due to emotional and physiological factors, which can lead to a misrepresentation of their levels of knowledge and ability.

Evidencing Validity

Evidence that can be used to evaluate validity – including evidence recorded for apprenticeship portfolios – may include:

  • Evidence of alignment, such as a report from a technically sound independent study documenting alignment between the assessment and its test blueprint, and between the blueprint and the government’s standards;
  • Evidence of the validity of using results from the assessments for their primary purposes, such as a discussion of validity in a technical report that states the purposes of the assessments, intended interpretations, and uses of results; 
  • Evidence that scores are related to external variables as expected, such as reports of analyses that demonstrate positive correlations with 1) external assessments that measure similar constructs, 2) trainers’ judgments of learner readiness, or 3) academic characteristics of test takers. 

Types of Validity

There are several ways to estimate the validity of a test, including content validity, concurrent validity, construct validity and predictive validity, each of which is explained below. The “face validity” of a test is sometimes also mentioned.

Content Validity

While there are several types of validity, the most important type for most certification and licensure programmes is probably that of content validity. Content validity is a logical process where connections between the test items and the job-related tasks are established.

If a thorough test development process was followed, a job analysis was properly conducted, an appropriate set of test specifications was developed, and item-writing guidelines were carefully followed, then the content validity of the test is likely to be very high.

Content validity is typically estimated by gathering a group of subject matter experts (SMEs) together to review the test items. Specifically, these SMEs are given the list of content areas specified in the test blueprint, along with the test items intended to be based on each content area. The SMEs are then asked to indicate whether or not they agree that each item is appropriately matched to the content area indicated. 

Any items that the SMEs identify as being inadequately matched to the test blueprint, or flawed in any other way, are either revised or dropped from the test. 
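
As a rough illustration of how such SME judgements might be quantified – a hedged sketch rather than part of any particular EPAO’s process, with invented item labels, panel size and threshold – one widely used statistic is Lawshe’s content validity ratio (CVR), which compares how many experts rate an item as essential against the size of the panel:

```python
def content_validity_ratio(essential_count: int, total_raters: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = total_raters / 2
    return (essential_count - half) / half

# Hypothetical panel of 8 SMEs; values = how many rated each item "essential"
# to the content area it is mapped to in the test blueprint.
ratings = {"item_01": 8, "item_02": 7, "item_03": 4, "item_04": 2}
panel_size = 8

for item, n_essential in ratings.items():
    cvr = content_validity_ratio(n_essential, panel_size)
    # The 0.75 cut-off is illustrative only, not a fixed rule.
    decision = "retain" if cvr >= 0.75 else "revise or drop"
    print(f"{item}: CVR = {cvr:+.2f} -> {decision}")
```

Items with a low or negative CVR become candidates for revision or removal, mirroring the review-and-drop step described above.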

Assessors should ask: Do assessment items/components adequately and representatively sample the content area(s) to be measured?

Concurrent Validity

Concurrent validity measures how well a new test compares to a well-established test. It can also refer to the practice of testing two groups at the same time, or asking two different groups of people to take the same test.

Once the tests have been scored, the relationship is estimated between the examinees’ known status as either masters or non-masters and their classification as masters or non-masters (i.e., pass or fail) based on the test. This type of validity provides evidence that the test is classifying examinees correctly. The stronger the correlation is, the greater the concurrent validity of the test is.
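
As a minimal, hypothetical sketch of that calculation – the data below are invented, and the phi coefficient is just one common way to correlate two pass/fail classifications – the estimate might look something like this in Python:

```python
import math

# 1 = master / pass, 0 = non-master / fail (both lists are hypothetical)
known_status  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]   # status established by an existing measure
test_decision = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]   # classification based on the new test

# Build the 2x2 contingency table for the two classifications
a = sum(1 for k, t in zip(known_status, test_decision) if k == 1 and t == 1)
b = sum(1 for k, t in zip(known_status, test_decision) if k == 1 and t == 0)
c = sum(1 for k, t in zip(known_status, test_decision) if k == 0 and t == 1)
d = sum(1 for k, t in zip(known_status, test_decision) if k == 0 and t == 0)

# Phi coefficient: correlation between the two binary variables
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"Correct classifications: {a + d}/{len(known_status)}, phi = {phi:.2f}")
```

The closer the coefficient is to 1, the stronger the evidence that the new test classifies examinees in line with their known status.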

Read more about specific examples of concurrent validity here.

Construct Validity

A construct is a hypothetical concept, used most often in the field of psychology, that forms part of the theories which try to explain human behaviour, e.g. intelligence and creativity. This type of validity tries to answer the question: “How can the test score be explained psychologically?”

Construct validity consists of gathering evidence about whether the behaviours observed in a test are indicators of the construct. The validation process is one of continuous reformulation and refinement, because a construct can never be fully demonstrated.

Example construct validation process when designing an assessment:

  • Based on the theory held at the time of the test, the examiner deduces certain hypotheses about the expected behaviour of people who obtain different test scores.
  • Next, they gather data that confirms or denies those hypotheses.
  • Taking into account the gathered data, they decide whether the theory adequately explains the results. If that isn’t the case, they review the theory and repeat the process until they get a more accurate explanation. (Adapted from here.)

Assessors should ask: Do assessments and the assessment system measure the content they purport to measure?

Predictive Validity

Another statistical approach to validity is predictive validity. This approach is similar to concurrent validity, in that it measures the relationship between examinees’ performances on the test and their actual status as masters or non-masters. However, with predictive validity, it is the relationship of test scores to an examinee’s future performance – whether or not they have mastery of the content – that is estimated. In other words, predictive validity considers the question, “How well does the test predict examinees’ future status as masters or non-masters?”

For this type of validity, the correlation that is computed is between the examinees’ classifications as master or non-master based on the test and their later performance, perhaps on the job. This type of validity is especially useful for test purposes such as selection or admissions.
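
As an illustration only – the test scores and later performance ratings below are invented, and a Pearson correlation is just one plausible way to express the relationship – a predictive validity estimate might be computed like this:

```python
import math

test_scores       = [55, 62, 70, 74, 78, 81, 85, 90]            # scores at certification/selection
later_performance = [2.8, 3.1, 3.0, 3.6, 3.4, 3.9, 4.2, 4.5]    # e.g. supervisor ratings months later

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Predictive validity coefficient: r = {pearson(test_scores, later_performance):.2f}")
```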

Assessors should ask: How well does the assessment instrument predict how candidates will perform in future situations?

Face Validity

Like content validity, face validity is determined by a review of the items and not through the use of statistical analyses. Unlike content validity, face validity is not investigated through formal procedures and is not determined by subject matter experts. Instead, anyone who looks over the test, including examinees/candidates and other stakeholders, may develop an informal opinion as to whether or not the test is measuring what it is supposed to measure.

While it is clearly of some value for the test to appear valid, face validity alone is insufficient for establishing that the test is measuring what it claims to measure. A well-developed examination programme will include formal studies into other, more substantive types of validity.

In summary, the validity of a test is the most critical indicator of test quality because, without sufficient validity, test scores have no meaning. The evidence you collect and document about the validity of your test is also your best legal defence should the examination programme ever be challenged in a court of law.

While there are several ways to estimate validity, for many certification and professional examination programmes the most important type of validity to establish is content validity.

How to provide written feedback that doesn’t crush a learner’s confidence.

While it isn’t the sexiest of topics in education, or even in the field of assessment, the task of giving feedback is more complex, and can certainly be more influential on learners’ progress, than assessors usually realise. My own experience as an external examiner in professional education and a vocational assessor in further education has highlighted that this topic needs a lot more time and attention.

The type, tone, quality and timing of feedback can, in fact, make or break a learner’s confidence in a subject because they need to feel that their tutor is invested in helping them to reach their potential.

Moreover, feedback consistently ranks among the worst-performing areas in course experience questionnaires and surveys in both further and higher education. For example, students at numerous higher education institutions have reported strong emotional reactions – both positive and negative – to the feedback they received. Good feedback encouraged them and made them feel appreciated, while poor feedback simply led to frustration.

Of course, there is also some contention that learners do not even notice or take on board the feedback they receive and, conversely, that tutors and assessors are not sufficiently explicit when giving feedback to learners. Nevertheless, it can be deflating to spend days or weeks on a high-stakes project only to receive vanilla feedback that sounds like anyone could have written it, let alone an expert in the subject.

High-quality, timely feedback for learners on apprenticeships is even more critical because of the focus on summative (End-Point) assessment rather than continuous/formative assessment (i.e. assessment for learning), which is how apprenticeships were previously assessed.

Portfolio feedback, vendor qualification scores, lesson time and sporadic review sessions are the only opportunities on an apprenticeship programme to detect the learner’s level of understanding and provide developmental feedback which could impact the final grade.

“All assessment is good if we get good feedback. If we don’t it’s useless.”

Association for Supervision & Curriculum Development (ascd.org)

Feedback Best Practice

For the benefit of inexperienced assessors, there are at least six different types of educational assessment (which I’ll detail in another article) but whatever the type, assessments generally have one of two purposes:

  1. Assessment of learning (summative)
  2. Assessment for learning (formative)

Assessment for learning – also known as assessment as learning – assesses a learner’s comprehension and understanding of a skill or lesson during the learning and teaching process. By contrast, assessment of learning is summative, i.e. carried out at the end of a unit or module, and assesses the learner’s achievement against a class or national benchmark or standard.

Here are some practical tips to motivate your learners to actually process and apply your formative feedback (assessment for learning) and to help reinforce their engagement with the course:

  • Always start from a position of goodwill – assume that the learner you are feeding back to has the best of intentions with their work, and communicate with your own best intentions in return. 
  • Focus on observations rather than interpretations – tell people what you saw, rather than giving them your analysis of their behaviour. 
  • When phrasing your observations, use ‘I’ rather than ‘you’, e.g. “I felt that…” rather than “you did this…”.
  • Concentrate on behaviours which can be adapted rather than personality traits which are inherent in the learner’s character. 
  • Give specific examples of the behaviour you are referring to by stating the facts without making a judgement
  • Encourage reflection by asking questions rather than making statements, e.g.:
    • How do you think that aspect of the project went?
    • What would you do differently to secure a more positive outcome if that situation happened again?
  • Avoid giving mixed messages. Starting a sentence positively and then adding a ‘but’ or  ‘however’ weakens the statement and lessens trust.
  • Be direct and concise so it’s easier for the person receiving the feedback to absorb the key points.
  • Monitor your tone. It’s easy for written communication to be misinterpreted so humour and sarcasm are best avoided.

Finally, remember that whether working in quality assurance, assessment or programme delivery, our role in the learner’s journey is to help them gain a new and objective view of their own performance in order to maximise their impact in future projects and in their professional life in the long term.

If you have any feedback tips or strategies to add to this list, please leave them in comments and share the article to help others.

Copyright © 2021 EPA Ready
