The Fatal Flaw of A/B Tests: Peeking

Posted by Paul Draper

A/B significance testing has become irresistibly simple. Plug a few numbers into an online calculator, and voilà: statistically verified results.

But this on-demand verification is fatally flawed: Looking at results more than once invalidates their statistical significance. Every page refresh on your A/B test dashboard is tainting your outcome. Here’s why it happens and how you can fix it.

P-values and p-hacking

The p-value expresses the likelihood of a false positive: the probability of observing a difference at least this extreme when no real difference exists. P-hacking is the practice of calculating the p-value using one process but conducting the experiment using a different process.

An example from the XKCD web comic:

The subject calculates p = 0.01 based on the process “I think of a number and he guesses it.” However, the actual process was “…and repeats this until he gets it right.” Her p-value test was based on a different process than what was actually used.

A/B tests often have a similar problem. Pearson chi-squared, Fisher exact, Student t, etc. — all assume the following process, diagrammed with Lucidchart below:

When followed, this process is mathematically guaranteed to have a false positive rate of only 5%.

However, most people want to (1) cut failed experiments as soon as possible and (2) promote successful experiments as soon as possible. So they refresh the test results (aka peek), hoping to observe significance as soon as it happens.

The problem is that this is a different process than our p-value was created for.

Simulated example

Let’s see how much of a difference peeking makes. Suppose we target a conversion event with 20% baseline success and accept p < 0.10. Let’s consider what happens when (1) B also converts at 20% and (2) B converts at the modestly higher 22%.

No peeking

The chances of accepting A or B, as a function of the fixed sample size:

  1. As expected, when there is no difference, the false positive rate is 5% for A and 5% for B.
  2. When there is a difference, A is favored, and detection likelihood increases with sample size.
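To make this concrete, the fixed-sample process can be simulated in a few lines. This is our own sketch (the post's actual simulations are on GitHub), using a two-sided pooled z-test as the significance test:

```python
import math
import random

def two_proportion_p_value(successes_a, successes_b, n):
    """Two-sided p-value for a difference in proportions
    (pooled normal approximation)."""
    pooled = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    if se == 0:
        return 1.0
    z = (successes_b - successes_a) / n / se
    # 2 * P(Z > |z|), with the normal CDF computed from the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def fixed_sample_test(rate_a, rate_b, n, alpha=0.10, rng=random):
    """Collect ALL n samples per variation first, then test exactly once."""
    a = sum(rng.random() < rate_a for _ in range(n))
    b = sum(rng.random() < rate_b for _ in range(n))
    if two_proportion_p_value(a, b, n) < alpha:
        return "B" if b > a else "A"
    return None  # no significant difference detected

random.seed(0)
trials = 1000
# A and B both convert at 20%, so any declared winner is a false positive.
fp = sum(fixed_sample_test(0.20, 0.20, n=1000) is not None
         for _ in range(trials)) / trials
print(f"false positive rate without peeking: {fp:.3f}")  # hovers near alpha
```

Run this and the false positive rate lands near the chosen alpha, as the process guarantees.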


Peeking

The cumulative chances for accepting A or B when the p-value is checked every sample (min 200 samples):

  1. After 2000 samples, there is a combined 55% chance of incorrectly concluding that one is better than the other — over five times the expected false positive rate of 0.10.
  2. When there is a difference, the chance of accepting the loser as the statistically significant winner jumps from nearly nothing to 10%.

The feedback loop has altered the process and destroyed the validity of the statistics.
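The inflation is easy to reproduce. Below is a rough sketch (ours, not the post's exact code) that recomputes a pooled z-test after every sample from 200 onward and stops at the first "significant" result:

```python
import math
import random

def p_value(successes_a, successes_b, n):
    """Two-sided p-value for a difference in proportions (normal approx.)."""
    pooled = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    if se == 0:
        return 1.0
    z = (successes_b - successes_a) / n / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peeking_test(rate_a, rate_b, max_n=2000, min_n=200, alpha=0.10, rng=random):
    """Recompute the p-value after EVERY sample and stop as soon as it
    dips below alpha -- the process that peeking actually creates."""
    a = b = 0
    for n in range(1, max_n + 1):
        a += rng.random() < rate_a
        b += rng.random() < rate_b
        if n >= min_n and p_value(a, b, n) < alpha:
            return "B" if b > a else "A"  # early, possibly spurious, winner
    return None

random.seed(1)
trials = 300
# Identical variations again, so every declared winner is a false positive.
fp = sum(peeking_test(0.20, 0.20) is not None for _ in range(trials)) / trials
print(f"false positive rate with peeking: {fp:.2f}")
```

The same test, applied the wrong way, now declares a winner far more often than alpha = 0.10 promises.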

What to do?

The simplest solution is to use the significance tests as they were designed: with a fixed sample size. Simple, but not practical.

Boss: “Variation B is doing great! Let’s give all users that experience.”

Underling: “We can’t. We have to wait another month.”


Boss: “Variation B is doing terrible! Shut it off right away!”

Underling: “We can’t. We have to wait another month.”


Boss: “Was A or B better?”

Underling: “We couldn’t detect a significant difference.”

Boss: “Keep running it.”

Underling: “We can’t. That was our only chance.”

Alternatively, we can still peek at the results but account for the overconfidence that peeking causes. If we want p < 0.10, we’ll, say, accept only p < 0.02 on a particular peek. Naturally, it will take much longer to reach significance. (This is in fact Optimizely’s approach, although instead of assuming continuous peeking, it adjusts only when the experimenter views the results.)
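How much stricter does the per-peek threshold need to be? A back-of-envelope bound (our sketch; Optimizely's published method is a more sophisticated sequential test) treats k peeks as independent looks, so the chance of at least one false positive at per-peek cutoff t is 1 - (1 - t)^k:

```python
def per_peek_threshold(alpha, k):
    """Per-peek p-value cutoff so that k independent looks keep the overall
    false positive rate at alpha. Real peeks are positively correlated, so
    this is a conservative (over-strict) correction."""
    return 1 - (1 - alpha) ** (1 / k)

# Targeting an overall alpha of 0.10:
for k in (1, 5, 20, 100):
    print(f"{k:>3} peeks -> accept only p < {per_peek_threshold(0.10, k):.4f}")
```

With around five peeks the cutoff lands near the p < 0.02 figure above; with continuous peeking it shrinks toward zero, which is why corrected tests take so much longer to declare significance.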

A different paradigm

So far, we’ve been getting rather cagey answers from statistics. The fundamental problem is that we are asking it the wrong question.

We don’t want to know which variation is better as much as we want to maximize success.

When asking the more direct question, statistics can assist us better. The “maximize success” problem is known as the multi-armed bandit problem, and its solution is iteratively adjusting the sampling ratio to favor success.

Using Thompson beta sampling and readjusting every 20 samples, below are the mean sampling rates for B as the experiment progresses:

As expected, the sampling gradually adjusts to the results. Armed with this new approach, let’s try the stopping problem again. We’ll declare a winner when the B sampling proportion is below 5% or above 95%. Below are the cumulative acceptance probabilities:

Oh no. Those numbers look very similar to p-test peeking! It turns out that Bayesian statistics are not immune to the peeking problem. The universe does not hand out free lunches.

Except the paradigm has shifted. Previously, we obsessively hit F5 on the test dashboard to avoid big losses, or to capitalize on big wins. But that’s no longer needed, as the statistical process makes those decisions for us.

Instead, we can safely and confidently test for as long as we have patience. By removing the urgent need to stop, we side-step the stopping problem altogether.
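For concreteness, here is a minimal Thompson beta sampling loop (our own sketch, assuming flat Beta(1, 1) priors and, as in the simulation above, reallocating traffic every 20 samples):

```python
import random

def thompson_ab(rate_a, rate_b, total=16000, batch=20, rng=random):
    """Thompson sampling with Beta(1, 1) priors. Each batch of traffic goes
    to whichever variation wins a single draw from its posterior, so the
    allocation shifts automatically toward the better performer."""
    # [successes, failures] per arm; the posterior is Beta(1 + s, 1 + f).
    stats = {"A": [0, 0], "B": [0, 0]}
    true_rates = {"A": rate_a, "B": rate_b}
    successes = 0
    for _ in range(total // batch):
        draws = {arm: rng.betavariate(1 + s, 1 + f)
                 for arm, (s, f) in stats.items()}
        arm = max(draws, key=draws.get)  # winner of this posterior draw
        for _ in range(batch):
            if rng.random() < true_rates[arm]:
                stats[arm][0] += 1
                successes += 1
            else:
                stats[arm][1] += 1
    return successes / total, stats

random.seed(2)
rate, stats = thompson_ab(0.20, 0.22)
share_b = sum(stats["B"]) / 16000
print(f"overall success rate: {rate:.3f}")
print(f"traffic share sent to B: {share_b:.2f}")
```

No stopping rule is required: the longer it runs, the more traffic flows to the better variation, and stopping at any moment simply keeps the current allocation.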


How costly are A/B tests? Below are the overall success rates for our various algorithms after 16,000 samples.

Each strategy makes a compromise between exploration and exploitation. Some do this better than others. Thompson beta sampling is the provably optimal strategy.

Summary of multi-armed bandit

Pros:
  • Optimum strategy for maximum successes.
  • No requirement for predetermined sample sizes or other parameters.
  • Codifying the process arguably makes ad-hoc alterations (p-hacking) less likely.
  • Higher levels of significance become practical.
  • Unlimited peeking.
  • The test can incorporate prior knowledge or risk assessment, via the choice of the initial sampling weights.

Cons:
  • Sampling ratios must be adjusted. Google Analytics content experiments already run multi-armed bandit tests for you, but for other tools you may need to use a calculator and update the sampling ratio yourself.
  • Convergence is slower relative to total sample size. A fixed 50/50 sampling ratio aims for the fewest total samples, whereas multi-armed bandit aims for the fewest total failures.

The appropriateness of Thompson sampling depends on how well its goal of maximizing test successes matches our objective.

Whatever your approach, make sure you apply the correct statistics to the correct process. You can even diagram it with Lucidchart!


More precisely, peeking and taking action on the test invalidates the significance. This is usually the intent; completely idle, unactionable curiosity is less frequent.

Know that bandit adjustment periods require the same attention to experimental design as fixed-ratio tests. If you know conversion rates are higher in the morning, p-value testing should include a full day; bandit sampling adjustment should include the same. If there is a delay between treatment and conversion, p-value testing should consider only sufficiently mature data; bandit sampling adjustment should consider the same. This seems obvious, but some have been surprised.

All simulations used can be found on GitHub.


Paul Draper graduated in Computer Science from BYU. He spends most of his time at Lucid in front-end development for Lucidchart. Paul likes sunny days, the Florida Gators, pot roast, the color red, tail recursion, and his wife and son. When not working, he thinks about cool projects he doesn’t have time for.

Posted in Thought Leadership

Why Code Snobs Are Invaluable

Posted by Matt Swensen


Some argue that “code snobs” waste time on trivia. They are accused of myopia and pedantry, and their peers claim that the effort they spend in crafting every detail in their code is a bad investment.  While this can occasionally be the case, I submit that their ideas and comments offer more benefit than cost in the long run.

I recently authored a change in our codebase that involved refactoring a few of our core JavaScript classes and adding a couple of new ones. Nothing too out of the ordinary. Our development workflow requires a different engineer to code review any commits before they go on to the master branch (and eventually out to production). I felt good about the solution that I had come up with for the particular task I was working on and was confident that it would pass the code review process with flying colors.

The engineer that I had requested for the code review—who had recently reworked our Selenium testing framework and is an active proponent of code quality for our team—left a comment on our pull request system for a particular area of the code I was submitting. He suggested a slight change, merely that some of the classes I had implemented were not using a pattern that some of their analogous counterparts were using, and that I should adjust them to better mirror that structure.

My knee-jerk reaction was to ignore the feedback because that would be the fastest way to being “done” and would gratify my laziness. I have more important work to get done before the end of this sprint, I mentally rationalized. (The downward pressure on code quality that the scrum methodology imposes is a topic for another day.) I was halfway through replying to the comment with reasons to just move forward and approve my pull request anyway when I realized that he was right. The classes really would be more semantic if they were consistent with their counterparts. My conscience got the best of me. I swallowed my pride and decided to take the additional 30 minutes to context-switch and make the change.

I realize that whether or not my two small JavaScript classes were implemented similarly to those around them probably has no real bearing on the reliability and performance of our large codebase. But small decisions like that add up, especially when they are made every day. Imagine how different your codebase would be if every engineer on your team pushed out the best quality code they could, every single commit, with unfaltering cleanliness and snobbery! Over the course of even a year, the difference would be significant.

The sum of those small code quality improvements then trickles to your users’ overall experience. And then back into your business. For a SaaS startup, the product’s code is the gift that keeps on giving, for the entire life of the company; the more effort that is put into quality, the more valuable that gift becomes.

Perhaps a “code snob,” then, is simply the term lazy programmers use to describe their more disciplined peers. If that’s the case, I want to be one.


Matt Swensen joined the Lucid team in the summer of 2013. He loves spending time with his wife and son, developing applications with bleeding-edge web technologies, and playing the drums. He is earning his master’s degree in Information Systems Management at BYU.

Posted in Thought Leadership

Are You Ready To Commit? Developing A Professional Software Engineer Workflow

Posted by Matt Dawson

Aspiring programmers often ask a question like, “What can I learn in X amount of time that will make me a star programmer?”, where X is way too little time to develop star programming skills.  There are many diverse skills needed to become a truly professional programmer.  It seems of late that the focus is all about learning the hottest language, knowing lots of algorithms and design patterns, and understanding the latest frameworks.  Those are all great and useful things to invest time in.  However, one area that seems to receive little attention from up-and-coming programmers is developing a workflow that leads to long-term professionalism, no matter what language or framework they are using.

I like to think of myself as a Professional Software Engineer.  However, nothing makes me feel more unprofessional than when I create bugs, break the build, or cause other people extra work when it could have been avoided.  In software development, we can’t avoid every problem, but we can avoid many of them.  With plenty of unavoidable problems lurking around, there is no sense in wasting time on the ones we can avoid.

I have personally caused almost every class of avoidable problem and brought upon myself plenty of well-deserved shame.  Early in my career I thought, “Hey, no big deal, nobody is perfect.” Of course I would try to learn from my mistakes, but I found that I was making some of the same mistakes over and over.  At one point I had a conversation with my boss in which he challenged me to do better.  After some reflection, I realized that I could improve.

From that time until now, I have refined a software engineer workflow that helps me ensure I don’t cause unnecessary problems and also increases the quality of the code I produce.  I’ve also mentored and managed other programmers and tried to help them develop their own workflows.  I hope that by sharing an outline of my workflow, you can develop your own workflow that will increase your value as an engineer and help you become a true professional.

It’s worth mentioning that while this list is fairly long, the reality is that these steps have become part of how I work. When it comes time to commit a change, most of these items have already been considered and done.  However, having a formal workflow acts as a forcing function and ensures that I’ve done everything I intended to do before I submit my work.

1. Does it compile?

This may seem like a stupid thing to have in my workflow, but trust me, it’s not.  I’ve seen even extremely simple changes (my own and other people’s) cause a build to fail and cause work to grind to a halt until fixed.  You bring shame on yourself and your family if you break the master build because you failed to compile your changes before committing.  You may begin to notice sideways glances as your colleagues pass you in the lunchroom–you’ll know why.  You may say to yourself, “It’s just a minor change to fix a teeny tiny defect found in a code review, so I don’t have to worry about compiling, right?”  WRONG! Don’t be tempted to  commit without compiling.  If you are working in a language that isn’t compiled, then load the updated code and make sure that the parser is happy.  

2. Have I stepped through my changes?  

Again, this may seem simple and obvious, but you’d be surprised how many times programmers skip this step and problems are found later on that would have been caught if the code had been stepped through with the programmer watching.  There may be exceptional cases when you can’t step through your change, but keep these to exceptions and don’t make excuses.  You probably won’t be able to step through every nook and cranny of your code, but you can step through the common code flows and make sure that the code is working the way you thought it would.  If you are tempted not to take the time to do this, you will probably have an inordinate number of bugs crop up later. Bugs that are found later are harder to fix because you are not as fresh with the code and you’ll have to take time to re-immerse yourself.  You may even find that other people have worked around your bugs, not understanding what the real problem is. Unwinding layers of workarounds once the code is fixed can be more complex than fixing the broken code would have been in the first place.  It’s worth taking the time to step through your code.

3. Have I run the automated tests?  

You have automated tests…don’t you?  If you don’t have automated tests, you should seriously think about implementing some.  Start small, and add gradually.  Automated tests can save you a lot of time in the long run and at least ensure that when you make changes, you haven’t broken some base level of functionality.  There is plenty of information available about automated testing–if you don’t know how to do it, take the time to learn.  Bottom line: if you have automated tests available to you, take advantage of them by running them before you commit.  It will save you time in the long run, save you from having egg on your face if you’ve broken something, and make you appear more professional to your colleagues.  Of course, if you find a problem as a result of running your automated tests, don’t overlook it–fix it before you continue with the workflow.

4. Have I created unit tests for my new code?  

This one dovetails closely with the previous and probably doesn’t need much additional explanation.  If you are using a test-driven development approach, you will have done this as a matter of course.  In any case, when you figure out what your own workflow should be, don’t neglect to include creating unit tests.  This will add some upfront time, and yes, it’ll take longer initially, but it will pay dividends in the long run, especially if you and everyone else run them before committing.  Your code will be more robust from the get-go, and in the future, many bugs will be found and fixed before they are committed — which means no one will have to be slowed down by having to find a bug, figure out how to reproduce it, log it, figure out who to assign it to, etc.

5. Have I considered all “platforms” my change will affect?  

If you work in a shop where you have exactly one target platform, then you can skip this step.  However, for a large portion of programmers, there are multiple platforms involved. The platforms might be different web browsers, different operating systems, or different types of hardware.  If you are in this situation, it’s worth spending a moment to consider if your change could cause a problem on a different platform than the one you are developing on. In many cases it won’t and you can move on quickly, but by giving some thought before committing, you may detect problems and save yourself and others some hassle.  A few things to think about are memory availability, CPU speed, and API differences. You probably already know which of these is likely to affect you in your job–just formalize your process a bit and make sure that you think about these things every time you commit.  Also, you may want to take some time to research the differences between your various target platforms so you can be better informed.  The more you know about your platforms, the more you can take this into account as you code. This will be an easy checkbox to mark off as you ready yourself to commit.

6. Have I removed any debug code or settings I have added?  

This one bites me all the time.  Oftentimes while I’m working, I’ll add debug code or change some hard-coded setting that is intended only to help me troubleshoot or test my changes. Of course, I do not intend to commit these changes, but I’ve done it many times. I’ve found that if I mark these types of changes clearly when I add them, then I can find and remove them more easily.  Typically I’ll add a comment like “//DEBUG” in any area I need to clean up before I commit, and then it’s easy to find and clear out any residual debugging code that should not be committed.

7. Have I considered the scope of my changes fully?  

This is important especially if you are operating in code that you are less familiar with.  A fairly common occurrence for me is to narrow in on a bug in unfamiliar code and figure out a small change that will fix it, make the change, test that the bug is fixed, and then commit (of course only after following all the other steps in my workflow).  The problem that can arise is that even a small change can cause big problems, especially in unfamiliar code.  It’s a good practice to take a step back and look at the bigger picture.  You may find that the code you are changing is used in a wider variety of ways than you knew about. From there you can decide whether you know enough to continue or if you need to get additional eyes on the change before you commit.

8. Are my changes robust?  

Take a moment to consider what conditions could cause your code to fail.  Does it make sense to validate parameters?  Are there any security concerns?  What corner cases are you not handling? Address those before committing.  There is an art to knowing how much “robustness” to add to a piece of code.  If you are unsure if it’s worth spending more time to make your code more robust, you may want to seek advice from other engineers you respect.  Over time you’ll develop an intuition to know how far to take it.

9. Have I had my changes code reviewed?  

Get someone else to review your changes and make sure the changes look correct.  Find the best person you can.  The best person is someone who is familiar with the area you are changing and understands the type of code you are writing.  By doing this, you’ll likely find things that you didn’t think of, even if you are rigorous about following all the other steps in your workflow.  Also, you’ll probably learn new things and thus become a better engineer as a result.

10. Have I considered QA?  

Hopefully your QA process is integrated with your development process and your testers are always aware of new things that are going into your application.  If not, it may be worth looping in a QA person to alert them to the changes you’ve made and to discuss when they’ll be available to test and how to go about testing.  If possible, you may even want to have a QA person give your new feature a trial run before committing it.  You’ll likely get good feedback and will probably discover something you should fix before committing.  Inexperienced programmers tend to harbor enmity towards QA.  Professional programmers realize the value of QA and take advantage of the help they can lend to the development process.

As you visit each step, if you end up making further code changes to address issues that come up, don’t forget to go back and reconsider whether it’s appropriate to revisit previous steps.  Revisiting the steps a second or third time as necessary will help you make sure that what you finally commit is as high quality as possible. Don’t be discouraged if you need to do this. Second and third passes are usually a lot quicker than your first pass through the workflow.  You’ll get better at it in time, and you’ll internalize these steps into your coding process, which will make the commit workflow smoother and quicker.


This blog post serves as a reminder to my current and future self as much as it is meant to help you, the reader.  I often find myself in situations where I feel driven to act quickly to complete an assignment.  My experience tells me that when I rush through and skip steps, it often comes back to haunt me.

Whatever language or framework you are using,  no matter how many algorithms and design patterns you know, having a formal workflow will help you be recognized for your code quality and professionalism and will help you advance in your career.

Posted in Thought Leadership

The Importance of Cross-Team Communication in Quality Assurance: A Developer’s Perspective

Posted by Trudy Firestone

At Lucidchart, we take quality seriously. Over the past few years, we’ve greatly improved our test automation frameworks, especially our JavaScript tests, for which we now use the Jasmine framework, and our Selenium tests. Our Quality Assurance team has grown, and we’re catching a lot of bugs before they reach customers. But bugs are slippery, and sometimes they make it past these increased testing measures. When they do, we encourage cross-team communication to get fixes out to our customers as quickly as possible.

As our first line of defense, we have an awesome customer support team that is always happy to resolve issues, but if it’s an actual bug in the code, not a misunderstanding over a feature, there’s not much they can directly do. Sure, they can confirm the bug and send in a bug report on behalf of the customer. A product manager sees it eventually, and, not realizing how much pain the bug is causing, they might assign it to a sprint a month away or more. When the issue finally reaches the programmer, it might be as easy as a two-minute fix, or it might be more involved. Regardless, the customer has probably waited a month, if not more.

That’s where communication comes in. By talking directly to a product manager, the support team member can make sure the issue is prioritized correctly, and possibly taken up in the very next sprint. But the product manager doesn’t need to be the only decision maker on the bug’s relative importance and complexity. In fact, that forces them to make a decision without all the relevant facts.

When the product manager informs their team (at Lucid that’s made up of software engineers and a Quality Assurance Specialist) of the problem that’s happening right then, it’s an opportunity to gather information on how quickly a fix can make its way to the customer. Only someone familiar with all the ins and outs of the code base can give an accurate time estimate for an issue.

Customer issues fall into two categories: quick fixes and long term adjustments. A developer can take a few minutes to let the product manager know where the current issue falls. If the issue is a quick fix, there’s no need to wait a full sprint cycle. I’ve had several issues pointed my way before they have made it onto a sprint, and I’ve been able to fix all but one of them in a few minutes and get a fix out to the customer almost as fast.

As an engineer, it’s easy to get accolades for writing new features, and it’s always fun to work with new languages and frameworks; however, when any part of your product has a bug, it’s the customers who suffer. Without regular communication with support and product management, it’s easy to see bugs as non-existent, or, at the very worst, not really affecting anyone. By having regular discussions with team members from different disciplines, it becomes easier to improve quality and internal understanding of the product.


Trudy Firestone is a Software Engineer at Lucid Software. She graduated from the University of Utah with a degree in Computer Science and loves to use her programming skills to create quality software that others enjoy using in their daily work. When not programming, Trudy loves to watch Doctor Who and weave.

Posted in Thought Leadership

Retain Users by Building A Great Help Center and Community

Posted by Mitchell Cox

Customer support. User education. Help Centers. Communities. Often, customer support is cast aside as boring or unimportant. Call centers in foreign countries replace quality user education content, and companies fail to provide forums for passionate users to help other users. This approach is a mistake.

User education is one of the most important aspects of any software product. If users don’t understand how to use your product, they won’t use it. No matter how intuitive a product may be, user education material in the form of help centers and communities bridges the product-knowledge gap for users and allows them to better utilize your product. From there, the math is quite simple: If users can better utilize your product, they will use it, pay for it, keep using it, and invite their colleagues to use it too.

So, how exactly do you build and scale a great help center and thriving user-driven community? Here are a few steps to help guide your work:

1. Write great help center content.

Remember, your users come to the help center when they have a question about how to use your product. They want answers. Give them the answer they so desperately seek by writing clear, concise, and easily-digestible content. Take extra time to ensure that your documentation is up-to-date with the latest UI changes and that the steps outlined actually match what the user will see in the product. Overall, keep it simple. Give users the answer they need, and get them back into the product as quickly as possible.

2. Engage users to develop a healthy user community forum.

Users are the key to a healthy community help forum. Engage users, develop relationships with your top forum contributors, and encourage healthy discussions within your community. Users can provide individual perspectives on the product that your help center content cannot. Your help center focuses on general user education, while your community focuses on specific, niche questions from individual users. Engage users. Provide them ample opportunity to engage with your team. Make your community the central hub for user engagement by creating a space for users to help other users.

3. Focus on design.

Just like any other product, the design of your help center and community is incredibly important. Make design decisions that help users get answers to their questions quickly. For example, add a prominent search feature to allow users to search for relevant articles. Additionally, you can add links to your most popular content to the home page of your help center to allow users to access important content without ever leaving the home page.

4. Identify KPIs and make data-driven decisions.

Data should drive every aspect of your help center, from what content you write, to how to design your help pages, to how you interact with users in your community. Digging into the data will allow you to understand how users interact with your help center and community. In turn, that data will help you make better decisions about how to improve your help center to better serve users. Metrics such as page views, bounce rates, and time on page can tell intricate stories of user engagement, allowing you to optimize user flows and help users access your content more easily.

5. Remember your goal: Help users.

At the end of the day, the goal of your help center and community is to help your users better utilize your product. By writing great help center content, engaging users, designing intuitive interfaces, and making data-driven decisions, you can help users get the most out of your app, leading to big dividends towards your other KPIs.

Overall, the key to creating a great help center and community is quite simple: Treat it as a product! Write great content, engage users, focus on design, measure key metrics, and, above all else, focus on helping users. Focusing on the help center and community as a key product for your company will enable you to create great user experiences and build a strong, user-driven product.

Check out our Lucidchart Help Center for more ideas and inspiration.


Mitchell Cox graduated with an Honors B.S. in Psychology from the University of Utah in December 2014, and joined Lucid’s Client Success team after working in chapter development and expansion for Beta Theta Pi Fraternity. Outside of his work at Lucid, Mitchell coaches CrossFit, is passionate about sexual assault education and prevention, and enjoys Utah’s beautiful outdoors.

Posted in Thought Leadership

How to write an effective bug report that actually gets resolved (and why everyone should)

Posted by McKay Christensen

I want you to take a moment and make a mental note of all the software you use on your computer or phone. What percentage of the software did you pay for? 50%? 20%? 0%? Chances are, if you’re anything like me, you got most of the software you use for free. I use almost exclusively open source software. Just because I use free software, however, does not mean that the software did not come at a cost. Thousands of developer hours went into each piece of software I use.

Free or not, good software makes our lives better. That is why we use it. So what can we do to give back to the developers who are adding value to our lives? A thank you email perhaps? Donate via PayPal to the developers (even better)? Become a ravenous fan who tweets and instagrams incessantly about the awesome software?

I would argue that one of the best ways we can support the software we love is by showing an interest in the development of the software by submitting bug reports. So next time you are bugged by a bug (see what I did there?), consider taking a more proactive approach than complaining or throwing your computer out the window, and actually take the time to report the bug.

What is a bug?

Everyone who has used software has run into a bug. According to what someone wrote on the Wikipedia article for software bugs, “A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.”

Most of the time a bug is a source of minor (or major) annoyance. Sometimes bugs are so severe that they can cause us to stop using certain software altogether. While spending the last year doing quality assurance for Lucid Software, I realized that finding and reporting bugs does not always have to be a nuisance; in fact, it can be quite empowering. I believe that everyone should report bugs because finding and reporting a bug empowers a user to help make the software they use every day better.

Before you start reporting bugs left and right, I would like to guide you through some things I have learned that make a bug report effective and consequently increase the likelihood of the bug actually getting fixed.

How to report a bug.

Step 1: Try to reproduce the bug to make sure that it is indeed a bug and not a user or environment error.

This might seem like an obvious first step, but I have surprised myself with how many times I have been halfway through reporting a bug, tried to reproduce it, and realized it was either a user error on my part or an environment issue. If you cannot reproduce the bug you found, there is a good chance that it is not actually a bug.

Step 2: Check if the bug has already been reported.

Once you have verified that you indeed have found a bug, you should see if the bug is already documented or reported. For popular software it is probable that the bug you have found has already been reported.

Aside from doing a direct Google search for your specific bug, one thing you can do is go to the bugs page for the software you are looking at and see if the bug has already been reported. Most software you are using will have a page dedicated to finding bugs. For example, if you do a Google search for “photoshop bugs,” the first link that will come up is Adobe’s bug reporting page. If a bug report already exists, that is great. You might even find a solution or workaround to the bug you are experiencing. If you cannot find an existing bug then you can create a new bug report.

If a bug has already been reported, you should not create an additional bug report. You should, however, read through the bug and write any additional comments that might help the developer to resolve the bug.

Step 3: Report the bug (or make a comment on an existing bug report).

Any developer will attest that not all bug reports are created equal. While a good bug report increases the likelihood of the bug getting fixed, a bad bug report can be a waste of time for everyone involved, and can result in confusion and annoyance.

Bugzilla has a detailed list of the anatomy of a bug, detailing fields to be included in a bug report. I won’t go over all these fields, but I will share my personal list of what I think every effective bug report should have.

  • Descriptive title
  • Environment
  • Expected Behavior
  • Actual Behavior
  • Steps to reproduce
  • Demonstration of bug

Note: For all of the examples below I will list an actual bug that I encounter all too frequently while using the (sadly) now discontinued Picasa photo viewer from Google.

Descriptive Title
When you searched for the bug (remember step two), what words did you type? These are probably the same words you should include in your bug report title so that other people can easily search for and find the report. Think of words or phrases that might often be worded differently and include both wordings in the title. Avoid ambiguous words like “broken” or “not working”; that much is implicit in the fact that it is a bug. Mention specifically how something is not working. A well-written title can often be sufficient for the bug to be fixed.

Example: Picasa 3.9 in Ubuntu crashes when clicking the link “Sign in with Google account.” The window closes and an error report comes up.

In my example I include the environment and list what is happening. While “crashes” and “window closes” can be synonymous, I include both phrasings just in case someone searches for one phrasing but not the other. While you don’t want to make the title a long run-on sentence, it is good to be descriptive enough that it is clear what the bug is.

Environment
Often bugs only happen in certain environments, so it is good to be as specific as possible. Make sure to list the operating system or browser you are using and, if applicable, which version of the software and hardware you are using. If you are able to, help out the developer by testing in multiple environments to see whether the bug is present in all of them.

Example: Ubuntu-Gnome version 16.04.1. Running Picasa from PlayOnLinux

Here I make it clear that I am not running this software in Windows.

Expected Behavior
Before writing what the bug is, it is useful to write what you expect to happen. If you just write the bug, the person reading it might not be totally clear on whether you are describing a bug or the desired behavior. Bugs are often “features”; it can sometimes be a matter of opinion. It is never clear what the bug is unless it is clear what the bug is not.

Example: When I click on the “Sign in with Google account” link, it should open a window allowing me to sign in.

Actual or observed behavior
This is the meat of the bug report and often the only thing that people write when they report a bug. Often, this is just the opposite of what you wrote previously for the expected behavior. When you write the bug, remember to avoid using ambiguous terms like “broken,” “not working,” etc. Make Victor Hugo proud. Go crazy on the detail. A reader can skip reading detail but cannot make up what is never written. If there are too many things that are not working as you would expect them to, consider creating multiple bugs (or a parent bug with sub-bugs).

Example: When I click on the “Sign in with Google account” link, the window closes and I must reopen Picasa. I get an error report that says that PlayOnLinux crashed.

Steps to reproduce
If I had to pick one thing that EVERY bug report should have, it would be steps to reproduce. Listing step by step how to reproduce the bug usually makes everything else clear. Listing the steps to reproduce the bug can make it more obvious what environment you are using, what you expect to happen, and what is actually happening. In my mind, if you have not found a way to consistently reproduce the bug, then you have not really found a bug; you have found a user error. Each step should be documented so that anyone can clearly reproduce the bug you have.


  1. Double click on the Picasa icon in PlayOnLinux to open Picasa.
  2. In the top right hand corner of the main Picasa window click on the link that says “Sign in with Google account.”
  3. Notice that the main Picasa window closes and I get an error message.

Evidence or demonstration of the bug
I like to record some evidence of this bug. This does a few things: 1) It requires me to be able to reproduce it consistently. 2) It serves as evidence that there is in fact a bug and it is not a tester’s error. 3) It shows a clear picture for the developer to see what is going on. Screenshots with annotations are often sufficient but in cases where there is user input or action, I like to show the whole process in a gif. I try to keep all my gifs under 30 seconds. If I can’t keep it under 30 seconds, I practice recreating the bug until I can do it more quickly, or I break the gif into multiple gifs.



There are a handful of great free programs to help record a gif screencast. My favorite software for this is ShareX (which sadly is only available in Windows). Linux users can use Peek. LiceCap works well in Windows and MacOS and can even be used in Linux through Wine.

One thing to keep in mind when reporting a bug is that the report is most likely not for you alone. Pay attention to who the audience might be: someone new to the project, an intern, a tester, someone online having the same experience as you, etc.

Additional tips when reporting a bug:

  • Look for an existing bug report before reporting the bug.
  • Proofread any bug report before you submit it. Incorrect grammar or wording can be very confusing and discouraging.
  • Provide as much relevant info as you can. This can include error logs and URLs.
  • Look for an existing bug report before reporting the bug.
  • Be as specific and concise as possible (without leaving out relevant details).
  • Test in multiple environments if you expect it to be an environment issue.
  • Look for an existing bug report before reporting the bug.
  • Avoid opinions. Unless you are submitting a feature request, you should stick to the facts and leave out how you would make the software if you were the developer.
  • Look for an existing bug report before reporting the bug. Seeing duplicate bug reports can be as annoying for a developer or product manager as it is to see duplicate bullet points in this list.

Step 4: Be proactive and follow up

If you really want a bug to be resolved, one of the best things you can do is follow up on the bug report (in a nice, proactive way). After filing the bug report, or in the report itself, you can make a comment to the developer and state your willingness to help. One nice thing to do is add words of encouragement to show that you appreciate the software. You can also offer to test in different environments or even test beta versions of the software.

A developer that feels appreciated is much more likely to fix a bug than a developer who feels annoyed. Remember that when writing a bug report and following up.

Why everyone should report bugs

As I stated earlier in this article, reporting bugs can empower you as a user to help make software better; it is probably one of the best things a non-developer can do to help improve software. Even before working at Lucid, I would often send emails to developers with bugs I had found. I was always surprised and impressed with the responses I received. I almost always got a reply, and in the end the developer would either fix the bug or explain to me why it would not (or could not) be fixed.

Someone who sits idly by waiting for bugs to be fixed is most likely someone who will be disappointed. Someone who reports bugs is someone who shows that they care enough to support the best possible product outcome. So the next time you are using software and you encounter a bug, make the extra effort and report it. Being proactive will not only benefit you in the long run, but will also benefit everyone who uses that software and ultimately improve the world of IT.

McKay Christensen works as an IT engineer at Lucid Software after having spent a year doing QA. Before working for Lucid, McKay taught English at an elementary school in China.

Posted in Thought Leadership | 5 Comments

Product Managers: How To Empower Your Engineering Team

Posted on by Matt Swensen

As a software engineer on a small scrum team, I have found that my relationship with the product manager has a significant and direct impact on my effectiveness. During my short tenure at Lucid Software, I’ve already had the opportunity to work with a handful of product managers. Here is what I have learned from my experiences collaborating with them.

Use as little process as possible

While methodologies like agile and scrum can have notable advantages, they need to be adapted and optimized for each team. In general, minimize the number of meetings and the amount of overhead so that engineers can get to programming. Making sure that the entire backlog of stories reflects current estimates is an example of an activity that should be abandoned. Additionally, don’t get too hung up on which roles should perform which day-to-day tasks; for example, whether you or the scrum master clicks “Start Sprint” in JIRA probably doesn’t matter.

Don’t try to squeeze more code out of your engineers

I once worked with a PM who, when engineers presented differing estimates for a particular feature during an estimation meeting, would record the lower of the two in the system and move on. This led to over-scheduled sprints, which resulted in lower-quality code and unnecessary stress. This attitude also makes engineers feel like code monkeys, paid only to get a job done as quickly as possible rather than to take the time and creativity required to craft code that will yield greater returns in the long term. Experienced engineers—and product managers—understand that there is only so much code that can be produced in a given period of time, and they plan accordingly.

A wiser PM whom I later worked with recognized the presence of differing estimates as underlying uncertainty/ambiguity and instead used the higher of the two in his planning and prioritization. Work moved forward much more smoothly (and was of much higher quality) on his team.

Don’t play the “role power” card

In fact, remove that one from the deck altogether. At Lucid Software, one of our core mantras is “teamwork over ego.” The moment a team member needs to assert expertise explicitly, the possibility of an effective relationship of trust diminishes. In fact, one of my favorite parts about working at Lucid is that even though there are people with prestigious credentials and incredible expertise, not one of them has a private office or carries any sense of elitism.

Remember that engineers can have valuable input when it comes to the direction of your product or the details of the user experience. Likely they feel as passionately about it as you do. When a disagreement arises, employ insights from user data or utilize other experts at the company—don’t say “we’re building it this way because I am the Product Manager and it’s my job to make these decisions.”

Have a flexible timeline

Estimating is hard, and humans are notoriously terrible at it. It’s rare that building a feature or user story takes precisely the amount of time that an engineer estimates it to take. Usually, it takes longer than anticipated. Be willing to be flexible when this happens. If every story is treated as an emergency that absolutely must get done immediately, you will start to seem like the boy who cried wolf. If there really is a hard deadline, proper communication with the engineers will often elicit from them the extra effort needed to get it done on time.

Be in sync with the rest of the product team

I once worked on a redesign/rework project that was planned by a PM and a UX designer who were passionate about improving a particular feature. A few sprints later, when we were midway through a full implementation, all the product managers in the organization shuffled teams, and the new product manager assigned to my team put the project on hold in favor of other priorities. His concern was that the update we were implementing wasn’t the right approach to improving the feature. A few weeks after that, a different product manager from another team expressed frustration because our half-baked improvement of the feature may have negatively impacted a larger epic that his team was working on. Among the lessons learned from this experience was that many of these concerns could have been avoided by having effective communication channels in place.


The relationship between engineers and product managers can be as challenging to fine-tune as it is critical to get right. Striking the correct balance, however, will yield greater effectiveness and higher-quality output for the team as a whole.


Matt Swensen joined the Lucid team in the summer of 2013. He loves spending time with his wife and son, developing applications with bleeding-edge web technologies, and playing the drums. He is earning his master’s degree in Information Systems Management at BYU.

Posted in Thought Leadership | Leave a comment

How To Increase Sales with Real Time Email Validation

Posted on by Derrick Isaacson

Many web apps annoy you by making you re-enter your email and password when signing up. Some also make you stop what you are doing to click a link in a test email to verify the address works. How often have you given up at that point, or gotten distracted before getting around to finishing the sign-up flow and never ended up using a product?

Services use confirmation steps like these because they increase the accuracy of user profile data, but they are a real drag on converting users into paying customers. Are we stuck with clunky sign-up flows, or is there a better way?

Step 1 – Keep It Simple, Silly

Over the four years I have been at Lucid Software, we have experimented with dozens of ways to register users. I want to share a recent win that minimizes the barrier to entry without sacrificing much accuracy of user data: real time email validation.

Years of running A/B tests at Lucid have shown significant dropoffs in conversions when we required users to re-type their emails or passwords to make sure they typed them correctly. The same goes for requiring users to click on a link sent to their email before using the product. So at Lucid we now KISS the sign-up form down to three simple fields:

In one flow we go even further and require only the email address, then gather the other profile fields after the user spends a few minutes in the product. A/B testing these simplifications shows significant increases in conversions, both from visits to registrations and from registrations to payments.

Step 2 – Validate Emails in Real-Time

While registrations and payments went up in our A/B tests, about 3% of users then registered with email addresses that bounced. We researched what was happening and found predictable misspellings like the following:

Side note – to see fun marketing hacks, go check out where some of those domains land you.

These users did not get our follow-up emails, could not receive shared documents, and could not log in to their accounts later unless they repeated the typo or contacted customer support. Few ever stuck with Lucid or paid for the premium service.

To solve this, our chief architect pointed us to an interesting SMTP trick that lets you tell whether an email address is an actual inbox without ever sending an email or making anyone click on a link. The SMTP protocol returns a different status (“250 OK” or “550 does not exist”) when you give it the recipient address, based on the validity of that address. Rather than continuing on to send an email, you can tell the server “QUIT”, and you have validated the address.

Tutorial on how to try this out here.
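As a rough sketch of that handshake: the probe below is written in Python and assumes you have already looked up the domain’s MX host (the standard library has no MX lookup; a package such as dnspython can provide one). The helper names here are hypothetical, not our production code.

```python
import smtplib

def interpret_rcpt_code(code):
    """Map an SMTP RCPT TO reply code to a deliverability verdict."""
    if code == 250:
        return True       # mailbox exists
    if code == 550:
        return False      # mailbox does not exist
    return None           # ambiguous: greylisting, rate limiting, etc.

def check_mailbox(mx_host, address, helo_domain="example.com",
                  probe_sender="probe@example.com", timeout=10):
    """Ask the recipient's mail server whether `address` exists,
    without ever sending a message."""
    with smtplib.SMTP(mx_host, timeout=timeout) as smtp:
        smtp.helo(helo_domain)
        smtp.mail(probe_sender)        # MAIL FROM:<probe@example.com>
        code, _ = smtp.rcpt(address)   # RCPT TO:<address> -- the actual check
        # No DATA is ever sent, and QUIT goes out when the block exits,
        # so no email is delivered.
    return interpret_rcpt_code(code)
```

In practice you also have to handle servers that accept every RCPT (catch-alls) and servers that tarpit or block probes, which is part of why we hand this off to a third-party service.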

Many people warn that this is fraught with peril, so we use a third-party service that does this type of analysis for us and also provides other interesting data about an email address, such as whether it is a catch-all address. There are lots of providers out there; we ended up using a combination of Kickbox and Mailboxlayer (sometimes we double-check an address with a second service).

Step 3 – Measure Results

Now when a user types their email in our registration form, we make an AJAX request to validate it and warn the user if they have misspelled it.

We released this as an A/B test of our new registration dialog. 97% of users saw no difference in behavior, but for the 3% of users who entered an invalid email, half saw a warning and half did not. Here are the results of the test for those 3% of users.

We saw a 9% drop in registrations from the control group (A), but 54% of the users who saw the warning (B) successfully corrected their email before registering.

This was not a surprise. We hypothesized that the cost of a small registration drop could be outweighed by gathering correct email addresses for a majority of these users. That proved accurate: we saw 34% more of this group successfully returning and using the product again within a few weeks, and 44% more paying in that time frame.

Conclusion and Next Steps

We found that the simple registration form with invisible real-time email validation gave a significant revenue lift. The email validation itself took one engineer just a few days to implement. Currently, 97% of users still have a minimal barrier to sign up, only about 1% of users enter an invalid email for their trial, more users stick with Lucid, and nobody has to re-type addresses or interrupt their flow to wait for an emailed link.

Future work

We tried three different warning messages. Perhaps unsurprisingly, the shortest one won (shown above). We would like to experiment with even more messaging ideas.

Also, the email validation service is slow, taking about two seconds on average from the browser’s perspective. We added optimizations such as firing the validation request when the text box loses focus, rather than waiting until the user clicks “Register.” That involves heuristics to deal with the race condition between receiving the validation response and the user clicking Register. We plan to test various thresholds for timeouts, delays, and caching of responses to see if we can further increase conversions.
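One way to structure that blur-then-register heuristic is a small gate object: start the validation as soon as the field loses focus, and at register time wait for it only briefly before falling back to an optimistic answer. This is a minimal sketch using Python’s asyncio with hypothetical names; it is not Lucid’s actual (browser-side) implementation, but the timing logic is the same.

```python
import asyncio

class EmailValidationGate:
    """Sketch of the blur-triggered validation heuristic described above.
    `validator` is any async callable taking an address and returning
    True (deliverable) or False (invalid)."""

    def __init__(self, validator, register_timeout=0.5):
        self._validator = validator
        self._timeout = register_timeout
        self._pending = None

    def on_blur(self, address):
        # Fire the slow validation request as soon as the email field
        # loses focus, instead of waiting for the Register click.
        self._pending = asyncio.ensure_future(self._validator(address))

    async def on_register(self):
        # Usually the response has already arrived by the time the user
        # clicks Register. If not, wait only briefly; on timeout, let
        # registration proceed rather than block the user.
        if self._pending is None:
            return True
        try:
            return await asyncio.wait_for(asyncio.shield(self._pending),
                                          self._timeout)
        except asyncio.TimeoutError:
            return True  # optimistic fallback for a slow service
```

Failing open on timeout means a slow validation service can never block a registration; it can only lose the chance to warn.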


Derrick Isaacson studied distributed computing at BYU and Stanford. Before joining Lucid Software, he worked at Amazon, Microsoft, and Domo. Derrick leads the Lucidchart team and loves working on REST services, security, and recruiting. For fun he cycles, backpacks, and takes his son to the hardware store in his F150.

Posted in Thought Leadership | Leave a comment

7 Steps to Design the Perfect Survey

Posted on by dave

Over my career, I’ve designed a lot of surveys — as a management consultant at Bain & Company, it was everything from a pricing survey for a large entertainment venue, to a customer satisfaction survey for one of the world’s largest fast food chains, to a process design survey for employees at a major engineering firm.  And at Lucid, I’ve sent surveys to literally millions of users to help gather product feedback, develop customer understanding, and more.

Yet, I’m still a bit nervous every time I’m about to click “Send.”  Are we asking the right questions?  Does the user flow make sense?  Have we tested the skip logic and other questions enough?

With that in mind, here are seven tips that have helped me along the way.

1. Establish the goals of the survey and be focused.

This seems obvious. But you’d be amazed at what happens when you share that you are designing a survey.  People and opinions come out of the woodwork. The product team would love feedback on the latest feature. The marketing team wants to know how customers first heard about the product. The customer success team wants to know how often the customer is using the product.

And all of a sudden, there’s a risk that your survey becomes a random compilation of questions with no driving theme.  Set the scope and stick to it.

2. Know *exactly* how you plan to use the data.

At Bain, we created a lot of PowerPoint presentations for our clients.  Part of the process there was to create “blank slides.”  In other words, you designed the PowerPoint presentation before you ever sent out a survey.  You knew what every slide would look like and exactly what type of information you needed to fill it.

This is a powerful forcing function. By completing this step, you often realize that the way you have framed a question will result in data which doesn’t cleanly fit what you hope to convey or learn from it.  And you can also realize where the gaps in your potential story or learnings will be, often prompting an additional question or two to be added to the survey.

3. Map out the survey before coding it.

Create a draft of the survey questions and the flow before ever touching survey software. For simple surveys with no skip logic or other intricacies, Google Docs is a likely candidate to draft the questions for your survey.

Most surveys, however, do include skip logic that will jump users to different questions depending on their answers.  In this case, a more visual application like Lucidchart can be perfect to make it immediately obvious how the answers lead to different paths.

4. Gather feedback from the relevant team members.

Both Google Docs and Lucidchart allow easy collaboration.  First, decide which types of permissions to provide to team members. For core team members, consider ‘editing’ privileges to allow them to make changes directly. For other stakeholders, consider ‘comment-only’ privileges to allow them to make suggestions but no changes. And for those whose feedback or approval is needed but who are relatively aloof from the project, ‘view-only’ access may be most appropriate.

Next, send an email to your colleagues requesting feedback. Be sure to state the purpose of your survey so that suggestions are relevant and constructive. You may want to consider asking a series of questions to help your collaborators focus their critiques. Once key stakeholders have had an opportunity to contribute their thoughts and provide necessary approvals, the content of your survey is finalized and you are ready to proceed.

5. Jump into the survey software.

Using the draft created above, enter each survey question and its pertinent answer choices into your survey software of choice. Be sure to add the relevant skip patterns and survey logic. This is a great time to refer to the visual flowchart you created to help ensure you’re setting things up properly (most survey software tends to be very text-based).

6. Test, test, test.

Two roads diverged in a yellow wood, but unlike Robert Frost, you must take them both. Make sure that you test every possible survey route to ensure that all questions and skip logic have been entered correctly.  If need be, print off the flowchart or Google Doc you made in step three and follow along with your finger or a pen as you start from question one and work your way to the end of the survey.
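If your skip logic also lives in a machine-readable form (an export from the survey tool, or the flowchart transcribed into a small data structure), you can enumerate every route mechanically instead of tracing them by hand. Here is a minimal sketch in Python, assuming a hypothetical dict encoding of an acyclic survey:

```python
def all_paths(survey, start="q1"):
    """Enumerate every (question, answer) route through a skip-logic
    survey. `survey` maps a question id to {answer: next question id},
    where None marks the end of the survey. Assumes no cycles."""
    paths = []

    def walk(question, path):
        if question is None:
            paths.append(path)
            return
        for answer, nxt in survey[question].items():
            walk(nxt, path + [(question, answer)])

    walk(start, [])
    return paths

# A toy three-question survey: answering "no" to q1 skips q2 entirely.
survey = {
    "q1": {"yes": "q2", "no": "q3"},
    "q2": {"a": None, "b": "q3"},
    "q3": {"done": None},
}
routes = all_paths(survey)
```

Walking the returned routes (or just counting them) gives you a checklist: each one is a path you should click through in the live survey.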

Once you have tested the entire survey yourself and have corrected any bugs, you may wish to consider sending it to a few colleagues or helpful interns to perform the same test with a fresh pair of eyes. I often see emails go out to colleagues saying, “Please test this survey for me! Want to make sure I didn’t miss anything.”  Unfortunately that email is typically unaccompanied by the documentation explaining the various paths, so colleagues end up randomly clicking through. Share the flowchart — or at least the text draft.

7. Send out to a small group first and check the data.

Even though your format and flow are now pristine, it’s not time to push the big red button yet. Send out your survey to a small group of customers and monitor the data that is returned. Sometimes a question that seems obvious to you and your coworkers may be misunderstood by your client base and result in poor data. Or if you’re asking a question like, “Why did you cancel your account?” and 90% of respondents pick the same answer, there’s likely an opportunity to split that answer into several so that you reach deeper insight. Make any needed adjustments and retry your test with a new small sample. Once you are satisfied that your results will be an accurate representation of your clientele, you are ready.

Push the button.

And take it from me, your heart will likely still be racing if you’re sending it to important customers (or a million users!).  But take a breath — you’ve followed the right steps — now’s the fun part when you start seeing the data and learning!


Dave came on board shortly after the launch of Lucidchart and has worn nearly every hat on the business and product teams. Prior to joining Lucid, Dave worked as a management consultant at Bain & Company. He graduated summa cum laude from Brigham Young University with a B.S. in Business Management. Dave is a political junkie and tries to keep up with the outdoorsy stunts of the rest of the team (with only an occasional trip to the emergency room).

Posted in Thought Leadership | Leave a comment

4 Books to Boost Your Q4

Posted on by Samantha Nielsen

Stuck in a quarter-end rut? Before you start Q4, read these books to boost your motivation and sharpen your skills. They are all pretty easy reads, and they all have audiobook options. So without further ado, here are four books to help you kill quota and become a better sales professional.

1. The 10X Rule by Grant Cardone

If you have ever felt like there is more you could be doing to reach your potential, this book is for you. It will teach you how to achieve the success you have dreamed of by working smarter, harder and getting ten times the results. This is honestly one of the most inspiring books I have ever read. Get it, read it and watch it change the way you work. One quote I love from that book describes the type of work ethic it takes to achieve phenomenal success:

“Until you become completely obsessed with your mission, no one will take you seriously. Until the world understands that you’re not going away—that you are 100 percent committed and have complete and utter conviction and will persist in pursuing your project—you will not get the attention you need and the support you want.”

In my current role at Lucid Software, I have seen countless co-workers and leaders apply this in their daily work and habits. From our CEO to our managers, there is a certain passion and drive for excellence that permeates the organization. This drive has been instrumental in our success, and I know it has been a driving factor in our ability to grow quickly while remaining profitable.

2. How to Win Friends & Influence People by Dale Carnegie

This one is an oldie but a goodie. Anyone who interacts with other humans on a regular basis should read this book. It shares time-tested lessons about how to work with others, build meaningful relationships, and live a happier and more fulfilling life.

“You can make more friends in two months by becoming interested in other people than you can in two years by trying to get other people interested in you.” 

3. The Challenger Customer by Brent Adamson and Matthew Dixon

The world of sales is changing. No longer is there just one buyer in a sale. In fact, most large deals involve nearly six decision makers, all with their own agendas. How do we, as salespeople, get buy-in from all of them at once? How do we close a deal with not just one customer from a company, but several? This book helps answer the important questions that arise when dealing with so many differing ideas and opinions and is a top pick for anyone looking to drive large deals in their company.

“As we considered the ‘track them down and win them over’ approach, you’ll remember, we found that while winning greater stakeholder access may help, more careful positioning of one’s offering to each stakeholder’s needs actually hurts us—at least in terms of driving high-quality deals. And that finding was really counterintuitive.”

4. Crucial Conversations by Kerry Patterson and Joseph Grenny

Crucial Conversations is the perfect guide for anyone who has high-stakes conversations on a regular basis. It explains how to manage those conversations in a way that will deliver results without offending potential customers. Whether in sales or in your personal life, if you have ever struggled to handle an important conversation, this book is a must read.

“As much as others may need to change, or we may want them to change, the only person we can continually inspire, prod, and shape—with any degree of success—is the person in the mirror.”

If you want to be a master in sales, make reading a priority. You have the knowledge and experience of every great communicator and closer right at your fingertips. Reach out and grab it!


Samantha Nielsen is in Account Development at Lucidchart. She focuses on providing Enterprise level solutions to industry leaders around the world. Samantha is a graduate of BYU’s Information Systems program and is passionate about technology and how it can be used to solve business needs. She enjoys meeting new people, delivering quality solutions, and cultivating business relationships.

Posted in Thought Leadership | Leave a comment