
The Case for Developer Support throughout your Optimization Program

When it comes to optimization, most programs at least acknowledge the need for developer support at different points in the process. In my experience, however, it’s rare for a program to include a developer the entire way through, from test planning to deployment. Development staffing can be hard to come by (and expensive), so teams may feel the need to exclude developers from anything that isn’t directly development (i.e., programming) related.

I feel this is a mistake. In my time as a test developer, I found that our optimization program was able to move faster, and with less rework, by including a developer earlier in the process. They were able to bring issues to light as early as the submission phase, rather than waiting until the QA phase to discover a concept that didn’t work correctly.

Even if you externalize test development completely, you may still find that your program runs smoother when you run things by an internal resource ahead of time. They will either know (or can find out) critical details that may make a concept workable for the external agency. This may seem counterintuitive: why include them in non-development discussions? Simply put, the earlier in the process you can identify issues, the easier and less expensive it is to pivot or to cancel development entirely. Wouldn’t you want to know that your house is going to cave in before you build it? That’s essentially the value a developer can bring by looking at the plan before building.

What skills should the developer have?

I’ve spoken on this before (check out the webinar), but ideally you’re looking for a more senior developer. Specifically, they should understand the following:

  • How the website works, in detail
  • How the test platform works, in detail
  • How any integrations (tag management, analytics, and so on) work

They should be able to translate business requirements into technical requirements, and then determine whether those are compatible with the existing site. Can the test be run as a client-side campaign? Would it need to be run as a server-side campaign? They should be able to flag anything that doesn’t pass the smell test, whether through a quick review or a more intensive research exercise. The developer does not need to know everything, but they should be well versed enough in the organization to find the answers by speaking with the appropriate technical teams.

The developer should specifically be looking for anything that would affect the validity of the experiment, or the stability and performance of the site, so that a risk assessment can occur. Ideally, they will offer potential solutions or tradeoffs when objecting, and can phrase them in a way that non-technical folks can understand.

How they can contribute to the process…

At Submission

Before calling a planning meeting to go over all of the incoming proposals, it may be worth a quick check with your friendly on-staff developer so they can flag anything that might prompt you to hold off or reconsider moving forward. The point of this discussion is to give the developer a chance to raise objections that could substantially affect test development early, rather than hearing them for the first time in the planning or kickoff discussions when you have a meeting full of people. It’s far better to know up front so you can accommodate it in your planning and timelines.

As an added bonus, it may prevent meeting attendees from wasting their time discussing something that either isn’t possible, or is only possible with considerable investment (beyond what could easily be approved on its own). It also typically costs less (in time and money) for the analyst and developer to speak than to staff a full meeting to have the same discussion (as is apt to happen should this surface later in the process).

Example: There is a desire to test the checkout flow (displaying the loyalty reward balance in a place it doesn’t currently appear), but the developer flags that this would be a major undertaking: the reward balance is not easily obtained at that point in the funnel, so the work would require server-side development, extensive regression testing, and a slot in a major release of the codebase. This critical context makes clear that any development of this concept will be tightly intertwined with whatever is already on the IT department’s roadmap, and it can now be discussed during planning, rather than being discovered in planning and delayed until a subsequent meeting.

In Planning

In planning, the developer can speak to anything going on deployment-wise or within IT that may affect the test deployment window. For example, you have a test you want to launch on Thursday, but there is a code release Thursday morning, so IT may want you to delay the test launch until they have completed their QA of the production deployment.

They can also mention any red flags discovered since the post-submission discussion, though ideally these would be rare. The goal is for any test discussed at planning to have already been vetted as at least technically feasible, provided the timing window can work around anything already in flight or planned.

Given their knowledge of the intended campaign at this point, they should be able to speak to the approximate complexity and possibly estimate the development time required.

In Kickoff

In the kickoff, the developer is specifically interested in identifying whether anything has changed since the earlier steps that may ultimately alter the technical design or planning. They may provide information relevant to design, the analysts, or the QA team, such as specification needs for various viewports, specific tests that should be considered, and the like. The development estimate should be confirmed, and they should make sure they are aware of any outstanding information needed by any of the other parties.

After this point the project moves into development, so this is the last chance to obtain alignment before the development and QA process. Beyond this point the cost to rework or redesign anything can increase dramatically, and the developer should seek to prevent that wherever possible by alerting the project team to anything that may endanger the development or the timeline.

In Development

Development is an interesting phase in optimization because it comes down to one of three paths:

External

In what is likely the most common case, an external agency handles all of the development work. Even so, an internal developer should provide all of the relevant details required for the campaign build. In my experience, if this spec work is not done properly, the external agency will do one of two things when questions arise:

1: They can stop development and seek clarification, which may kill the timeline or the intended launch window. It is entirely possible to lose days at a time to the back and forth.

2: They can guess. A guess has a chance of working out, but sometimes the agency will guess wrong, and that likely won’t be caught until one of the later phases. You want to avoid this because it’s more expensive in time and money to catch errors in QA and then return them to the agency to address. This can result in significant timeline slippage due to the back and forth.

The way to solve both of the above is the same: over-communicate. The developer should provide all of the information needed to construct the campaign in the most technically sound way possible so that it doesn’t break. I also advise having the developer look over the campaign brief before sending it to the agency for the same reason: they can identify sections that need more detail for development. The brief might say “hero image,” while the internal developer can list the specific div ID, which eliminates any possible confusion about what part of the page is being discussed.

This may involve the developer working with internal teams to set the external agency up for success ahead of time (by modifying the CMS or page template, providing specific div IDs, and the like). What we’re trying to avoid with this step is the test breaking in production because a seemingly unrelated site update removed something the test depended on.
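To make that concrete, here is a minimal sketch of the kind of stable hook a developer might arrange with the internal teams and hand to the agency. The data-test-hook attribute, the checkout-hero name, and the waitForElement helper are all hypothetical, not part of any particular testing platform; the point is that the variant code targets an agreed-upon hook rather than brittle, layout-dependent selectors.

```typescript
// Hypothetical sketch: attribute names, selectors, and helper are illustrative,
// not taken from any specific testing platform or site.

// Poll for an element the page template exposes as a stable hook, so the
// variant code doesn't depend on auto-generated or layout-specific class names.
function waitForElement(selector: string, timeoutMs = 5000): Promise<Element> {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const poll = () => {
      const el = document.querySelector(selector);
      if (el) {
        resolve(el);
      } else if (Date.now() - started > timeoutMs) {
        reject(new Error(`Test hook not found: ${selector}`));
      } else {
        window.setTimeout(poll, 100);
      }
    };
    poll();
  });
}

// Variant code targets the agreed-upon hook from the brief.
waitForElement('[data-test-hook="checkout-hero"]')
  .then((hero) => {
    // Placeholder markup for illustration only.
    hero.insertAdjacentHTML(
      'beforeend',
      '<p class="loyalty-balance">Your reward balance: 1,200 points</p>'
    );
  })
  .catch((err) => {
    // Fail safe: leave the control experience untouched and surface the issue.
    console.warn('Experiment variant not applied:', err);
  });
```

Because the hook lives in the page template rather than in the variant code, an unrelated redesign that shuffles class names is far less likely to silently break the running test.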

Internal Client Side

A rarer scenario: organizations with this workflow have a staff developer trained in the testing platform who builds the tests in house, in the same way an agency or vendor would. This workflow offers some benefits:

  • Turnaround time is often not impacted by other clients being serviced.
  • Ability to access / reference source code and documentation directly.
  • Ability to work with other internal teams for the best result.
  • Ability to modify the site code to result in a better test experience.
  • Potentially a quicker bug fix cycle since rework can be communicated directly to the developer via internal channels/tickets.
  • Lastly, if necessary, time can be negotiated with the developer’s manager to speed development or the resolution of problems.

Internal Server Side

An even rarer scenario than the above two: organizations with this workflow tightly bind their optimization and testing efforts to dedicated developers or product teams. This is commonly seen with feature testing or complex test development that is not suitable for client-side test delivery. Here the internal testing developer can play a role by:

  • Advising internal teams on the testing requirements and relevant integrations.
  • Developing technical specifications and use cases for the product/feature teams.
  • Modifying the site code to produce a better test experience.
  • Assisting in running end-to-end testing.
  • Acting as a bridge between the product team and the optimization program to ensure both sides’ needs are served and understood.

In Quality Assurance / User Testing

While it’s common for agencies and vendors to do their own QA before turning the campaign back over for acceptance testing, I strongly encourage programs to run their own internal QA as well. An internal developer can prove invaluable here by:

  • Reviewing the agency or vendor’s code to make sure it does what it’s supposed to do and isn’t doing anything unexpected, insecure, or detrimental to performance (see the sketch after this list).
  • Standing ready to answer questions for QA and, if it’s an internal build, fixing the issue directly.
  • Standing ready to answer questions for QA and, if it’s an external build, helping communicate the details (in depth if needed) to the external agency or vendor.
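For the code-review point above, a hedged sketch of the sort of defensive structure an internal reviewer might look for in delivered variant code: scoping the change to the intended pages, falling back to the control experience when an expected element is missing, and never letting an error escape into the checkout flow. The function name, selector, and copy change below are all hypothetical.

```typescript
// Illustrative only: names and selectors are assumptions, not from a real build.

function applyCheckoutVariant(): void {
  // Scope check: only run on the page the experiment was designed for,
  // so the change cannot leak onto unrelated templates.
  if (!/^\/checkout\//.test(window.location.pathname)) {
    return;
  }

  try {
    const cta = document.querySelector<HTMLButtonElement>(
      '[data-test-hook="checkout-cta"]'
    );
    if (!cta) {
      // Fail safe: if the expected element is missing, keep the control
      // experience rather than throwing and breaking the page.
      return;
    }
    cta.textContent = 'Complete my order'; // illustrative copy change
  } catch (err) {
    // Never let a variant error bubble up into the checkout flow.
    console.warn('Variant skipped due to error:', err);
  }
}

applyCheckoutVariant();
```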

At Deployment

At deployment time, a developer may take on a few roles:

  • They may manage any interaction with an existing Change Management process ahead of the desired deployment.
  • They may be the ones actually able to launch or disable the test, depending on company policy.
  • They may serve on call in the event the test misbehaves after deployment.
  • They may assist QA in verifying that the deployment worked correctly.
  • They may alert other internal teams upon successful deployment.

While one or more of these can be handled by non-developer staff, I feel it’s critical that the folks responsible for keeping the site working be made aware of changes in site behavior (such as launching an A/B test). Good communication between the DevOps folks and the optimization program helps avoid people being paged off-hours because the site started doing something that wasn’t expected or communicated properly.

What if I can’t get a full time developer to support my program?

A common scenario is that optimization programs struggle to justify (or locate) developer talent. If the program can’t justify a permanent developer, all hope is not lost. What I recommend in this case is to try to borrow a few hours a month of a senior developer’s time. You can outsource the majority of the test development (in some cases), but you still want that support for internal alignment and for the reduction in rework and bug tickets. You may be able to justify it by cataloging just how much rework is done, or how many requests for more information come in from the external partner.

Sell the vision to management: the goal is to increase velocity by reducing the amount of time it takes to get a test out the door. The best way to do that is to reduce the odds of having to rework a significant amount of the development, which is done by identifying as many issues as possible during planning and design. So we come full circle, because the best way to identify those issues is to have a developer present as early in the process as possible.

If you want to develop tests quickly and without rework, a tightly integrated feedback loop is required across the team. It can be a hard sell, but when done right, you’ll find it was well worth it.
