One thing that the model crucially lacked until now is **being tested against real-world data**. As a scientist, I don’t like to leave something floating without testing it, and I want to see how many teeth it loses when it faces the real world. Hence, I didn’t waste much time and applied it to the data recently gathered by Barista Hustle. As mentioned earlier, one aspect I did not explicitly include in the equations of the model is the effect of fines. The maths behind them is boring, because I assume they get extracted to the maximum as soon as they touch water, but I do include their effect when modeling real-life data. My model makes no prediction about how many fines there are in a given brew, so their proportion is yet another free parameter.

For all those of you new to this, what I’ll attempt here is called *fitting a model to experimental data*. It’s a game we often play in science: build a model based on some assumptions, which has a few free parameters (think of them like knobs you can adjust to get a different result). Take a set of real-life data, and try to reproduce exactly the same data by playing with your free parameters. If the model is good, you’ll be able to adjust the free parameters such that the model looks a lot like the data. If you have a really large number of free parameters, you will be able to reproduce any kind of data, and your model becomes very poor at making any kind of predictions – in that situation you will even be unable to test whether your starting hypotheses were good or not.

This might all seem a bit abstract, but I think it has the potential to unlock some important understanding about coffee brewing, which I hope will inform us on new ways we can experiment with brew methods and recipes.

Because we will now explicitly work with the Barista Hustle data, I want to remind readers of what their experiment was. They ground some coffee, and sifted it with a Kruve sifter set up with the 250 micron and 500 micron sieves. They thus ended up with three groups of coffee grounds; those that went through the 250 micron sieve (i.e., with diameters smaller than 250 micron along at least one axis); those that went through the 500 micron sieve but not the 250 micron sieve; and those that couldn’t pass even the 500 micron sieve. Depending on how long they sieved, and how much static electricity was present in the grounds, we should expect that **some** grounds fine enough to pass a given sieve might not always have passed it, but the global result will still be that they end up with three piles of grounds, whose average sizes will be (1) smaller than 250 micron, (2) between 250 and 500 micron, and (3) above 500 micron.

They then weighed the exact same amount for each kind of grounds, and placed them in three distinct cupping bowls. They added hot water, and took samples out of the cupping bowls at distinct times. They then measured the concentration of their samples, and worked out the corresponding average extraction yield in the usual way. They got the following result:

So now, the question is – can my model fit this data? What we have here are 18 data points, but those at *t = 0* are really a given (no water = no extraction), so we really have 15 data points to model. When you play the game of model fitting, it is very important to have fewer free parameters than data points. In my case, the model has 9 free parameters; here’s what they are:

- The average characteristic time scale of extraction (*τ*)
- The characteristic depth that water reaches in coffee particles (*λ*)
- The maximal extraction yield that this coffee can reach
- The average diameter of particles in the first bowl
- The average diameter of particles in the second bowl
- The average diameter of particles in the third bowl
- The mass fraction of fines in the first bowl
- The mass fraction of fines in the second bowl
- The mass fraction of fines in the third bowl

The first three parameters are required to be the same for all three cupping bowls, because I assumed they use the exact same brew method, and the same agitation in all cases. We also know they used the same coffee and water in all cases. Now, I want to remind you of the hypotheses my model relies on. Any one of these being false will potentially hurt the model’s ability to represent the data.

- The rate of extraction decreases exponentially for each coffee cell.
- The rate of water contact decreases exponentially with the depth of a coffee cell.
- All coffee particles are perfect spheres (the model actually doesn’t change much if you remove that hypothesis).
- Coffee cells are cubic with a side of 20 micron.
- Each bowl contains some fines plus a set of perfectly uniform coffee particles.
- All available chemical compounds get immediately extracted from fines.

We in fact know that some of these **must** be false (e.g. spherical particles), but it will be very informative to see how well the model fares despite this. Make no mistake: making simplifying assumptions is a very powerful tool in science. It allows you to verify which aspects of an experiment are most important in explaining the outcome, and which aspects have a lesser impact. It is important to bear in mind that the model is a simplified version of reality, but that does not make it useless.

Now, we need a recipe to adjust the 9 knobs (the free parameters) of our model in a way that makes it look the most like the data. There are **several** ways to do this, and this is one aspect of science that I have a decent amount of experience with – a method I really like for this is called a *Markov Chain Monte Carlo sampler*.

A *Markov Chain Monte Carlo sampler* is a bit like having a hundred monkeys with their own respective black boxes, randomly tweaking the knobs and judging how well their model fits the data. When the model of one monkey starts resembling the data, it starts getting agitated and yells, which gathers the attention of some other monkeys. They get jealous and notice how the successful monkey has adjusted its knobs, and they try to adjust theirs in a similar way, but they don’t do it perfectly. Some monkeys are harder to distract and they keep exploring very different combinations of knobs, until there is a turnaround point where so many monkeys are yelling that really all monkeys are starting to converge on a similar set-up. This moment is called the end of the *burn-in phase* in technical terms. Once it is reached, you can let the monkeys keep playing with the knobs for a while, and carefully pay attention to what knob combinations they try. At that point, most of what they try will be very close to the best solution. If you gather enough observations despite all the yelling, you will be able to tell what their average knob setting was, and how much they swung each knob around after the burn-in phase. From mathematical considerations, these two aspects will correspond to your *best parameters* as well as *measurement errors* for each parameter value. This method is very powerful at exploring all different combinations of knobs, while paying more attention to the combinations that produce better results.

So, I let my hundred computational “*monkeys*” try 4000 combinations each – the point where all of them were yelling loudly was reached well before the first 1000 combinations, so I paid attention to the last 3000 combinations to determine what the best parameters were. Here’s what the best combination generated:
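The monkey analogy maps onto a standard Metropolis sampler. Here is a minimal, self-contained sketch of the idea – not the actual 9-parameter, 100-walker fit, and all numbers are made up for illustration – recovering a single parameter (a maximum extraction yield) from noisy fake measurements:

```python
import math
import random

# Fake data: noisy measurements scattered around a "true" value of 24.0.
random.seed(1)
truth, sigma = 24.0, 0.5
data = [truth + random.gauss(0.0, sigma) for _ in range(30)]

def log_likelihood(e_max):
    # Gaussian log-likelihood of the data given the model "E = e_max".
    return -0.5 * sum(((d - e_max) / sigma) ** 2 for d in data)

def metropolis(n_steps=5000, start=20.0, step=0.3):
    current, current_ll = start, log_likelihood(start)
    chain = []
    for _ in range(n_steps):
        proposal = current + random.gauss(0.0, step)
        proposal_ll = log_likelihood(proposal)
        # Always accept better knob settings; sometimes accept worse ones.
        if math.log(random.random()) < proposal_ll - current_ll:
            current, current_ll = proposal, proposal_ll
        chain.append(current)
    return chain

chain = metropolis()
kept = chain[1000:]  # discard the burn-in phase, keep the "yelling" phase
best = sum(kept) / len(kept)
error = (sum((s - best) ** 2 for s in kept) / len(kept)) ** 0.5
print(f"best parameter: {best:.2f} +/- {error:.2f}")
```

The chain average after burn-in plays the role of the *best parameter*, and the chain scatter plays the role of the *measurement error*, exactly as described above.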

In all honesty, I was really surprised at this – I expected the model to do much more poorly. You can see that the first cupping bowl with the coarser particles (lower average extraction yields) does not fit as well as the other ones, so whatever effect makes the model imprecise is **more pronounced for coarser particles**. I suspect this may be related to the fact that each cupping bowl has a distribution of particle sizes and shapes, rather than a very uniform set of particles. Now, let’s have a look at what the best values were for the free parameters, and their measurement errors:

- Maximum extraction yield: 24.0 ± 0.1 %
- Characteristic extraction time: 5 ± 2 s
- Characteristic depth reached by water: 35 ± 8 micron
- Average diameter of particles in bowl 1: 1400 ± 300 micron
- Average diameter of particles in bowl 2: 420 ± 70 micron
- Average diameter of particles in bowl 3: 140 ± 60 micron
- Mass fraction of fines in bowl 1: 49 ± 4 %
- Mass fraction of fines in bowl 2: 50 ± 10 %
- Mass fraction of fines in bowl 3: 50 ± 30 %

The characteristic depth reached by water is smaller than what Matt Perger estimated (100 micron), but I am using an exponentially decreasing reach of water, while he assumes that water accesses the coffee cells equally well down to 100 micron, and then not at all in deeper layers. Matt’s estimation of a 100 micron depth corresponds to the layer where only 6% of water reaches in my model.
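Assuming the exponential form described here, the fraction of water that reaches a depth *d* is exp(−*d*/*λ*); with the fitted *λ* = 35 micron, a quick check recovers the ~6% figure at Matt’s 100 micron depth:

```python
import math

lam = 35.0    # fitted characteristic depth reached by water, in microns
depth = 100.0 # Matt Perger's estimated uniform-access depth

# Exponentially decreasing reach: fraction of water reaching this depth.
fraction = math.exp(-depth / lam)
print(f"{fraction:.1%}")  # about 6%
```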

The characteristic extraction time is very short, at 5 seconds. This means that, if you were to leave intact coffee cells in a cupping bowl in direct contact with water (i.e., each coffee particle would not have any coffee cell hidden under a surface), you would extract ~63% of all available compounds in just 5 seconds! This illustrates how important it is to consider the effect of coffee cells being hidden under the surface of a coffee particle, as we need much more than 5 seconds to be satisfied with a cupping bowl.
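The ~63% figure is just the 1 − 1/*e* point of an exponential. A sketch, assuming a fully exposed cell extracts as *E*(*t*) = *E*_{max} (1 − exp(−*t*/*τ*)) with the fitted values quoted above:

```python
import math

tau = 5.0     # fitted characteristic extraction time, in seconds
e_max = 24.0  # fitted maximum extraction yield, in percent

def exposed_cell_yield(t):
    # Extraction yield of a coffee cell in direct contact with water.
    return e_max * (1.0 - math.exp(-t / tau))

for t in (5.0, 15.0, 30.0, 60.0):
    print(t, round(exposed_cell_yield(t), 1))
```

At *t* = *τ* = 5 seconds, the yield is *E*_{max} × (1 − 1/*e*) ≈ 63% of the maximum.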

To me, the most surprising parameters were the mass fractions of fines in each cup. They are **huge**, and almost constant across particle sizes! I was tempted to make the assumption that each coarse coffee particle has a thin surface layer, half a cell thick, of broken coffee cells that act like fines. But what we have here is something entirely different: a whopping 50% of all coffee mass seems to be trapped in fines that extract immediately, even in the cupping bowl with coarse particles! Here’s a hypothesis that I think could possibly explain this – I did not come up with this, but saw a comment that Scott Rao made somewhere about this: a lot of fines may be sticking to the surfaces of coarser coffee particles, possibly by static electricity. It’s also possible that some fines did not have time to migrate through the sieves during Matt’s experiment, even though they were freely hanging out among the coarser particles.

Now, it would be unfortunate to just stop here. The main driver behind why I started worrying about the dynamics of extraction was the question of *flavor profile*. Since we do not have sensory data to play with, the best we can do is approximate it with a profile of extraction yields. Now that I have a model in which I know how many particles there are of each size, and how fast each layer extracts, it becomes possible to build a distribution of extraction yields for each coffee cell. An even more interesting thing we can look at is the extraction yield profile of each drop of coffee in the resulting cup. To obtain this, we just have to look at how much mass was extracted from each coffee cell, and what its corresponding extraction yield was. We can do this for each cup separately, and at each moment where Matt sampled the cup. Here’s what you get at the first sample collection (15 seconds):
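Here is my own reconstruction of how such a distribution can be assembled – a sketch, not the author’s actual code. It treats a particle as concentric shells one cell thick, damps the extraction rate by exp(−depth/*λ*), weights each shell by its mass, and adds the fines as a spike at the maximum yield. The parameter values are the fitted ones quoted above for the second bowl; the exact shell treatment is an assumption:

```python
import math

# Fitted parameters quoted above (bowl 2); shell treatment is my assumption.
e_max, tau, lam = 0.24, 5.0, 35.0    # max yield, seconds, microns
radius, fines_fraction = 210.0, 0.5  # half of 420 micron; mass in fines
cell = 20.0                          # coffee cell size in microns
t = 15.0                             # first sampling time, in seconds

# Walk through spherical shells one cell thick, weighting by shell volume.
shells = []
r = radius
while r > 0:
    depth = radius - r + cell / 2.0          # depth of the shell's midpoint
    r_in = max(r - cell, 0.0)
    volume = r ** 3 - r_in ** 3              # proportional to shell mass
    rate = math.exp(-depth / lam) / tau      # water contact decays with depth
    ey = e_max * (1.0 - math.exp(-rate * t)) # yield of cells in this shell
    shells.append((volume, ey))
    r = r_in

coarse_mass = sum(v for v, _ in shells)
# Mass landing in the cup from each shell is proportional to volume * yield.
extracted = [(v / coarse_mass * (1 - fines_fraction) * ey, ey)
             for v, ey in shells]
extracted.append((fines_fraction * e_max, e_max))  # fines extract fully, at once

total = sum(m for m, _ in extracted)  # average extraction yield of the bowl
mean_ey = sum(m * ey for m, ey in extracted) / total
fines_share = fines_fraction * e_max / total
print(f"mean EY of the liquid: {mean_ey:.1%}, share from fines: {fines_share:.0%}")
```

The pairs in `extracted` are exactly a (mass, extraction yield) histogram of each drop in the cup: a diffusion tail from the shells, plus a large spike at *E*_{max} from the fines.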

Don’t be surprised that nothing shows up at 0% extraction yield! This is a distribution of what actually landed in the cup of coffee. While a lot of coffee cells were extracted at 0% because they were near the core of a coffee particle, the “brew” that is 0%-extracted just did not contribute to the cup of coffee. What you are seeing here is a combination of two things: (1) a profile peaking at ~21% extraction yield with a long tail to the lower extraction yields, which corresponds to the stuff that was extracted by diffusion (you might recognize these shapes from my last blog post); and (2) a large peak at ~24% extraction yield, corresponding to the fines which immediately extracted by erosion.

There is something shocking to me about this distribution: between 70 and 80% of the liquid in the cup comes from coffee cells that were fully extracted! This lends a lot of credence to Matt Perger’s claim that high extractions do not necessarily taste bad, as well as Scott Rao’s comment that fines play a crucial role even if you sift your coffee. This is however confusing to me for one reason: where does all the bitterness and astringency come from in over-extracted brews, if a cup mostly extracted at ~25% tastes good?

While I do not yet have an answer that I find satisfactory to this question, here are a few hypotheses that I’d like to throw out here:

- Maybe the bitter and astringent compounds have a *much* slower extraction speed and account for a very small fraction of the mass.
- Maybe the presence of bitter and astringent compounds is explained by *something else* than high extraction yields. That something else would need to correlate with extraction yield, because we know that astringent and bitter cups have either a higher average extraction yield, or a more uneven extraction yield which caused a larger fraction of the brew to be highly extracted. This is closer to Matt’s explanation that you get bitterness when you “*beat up*” your coffee too much. No offense to Matt, but I’d really like to find a more precise and scientific description of this process.
- Maybe something is flat wrong with my model, and the fines are not the entire explanation for the very quick rise in average extraction yield in the first sample at *t = 15* seconds. It would then be surprising that the model reproduces the data quite well.

Whatever the answer is, I think we need to do more experiments like this one. Having a much finer time sampling in our data collection, especially at the start, and going to much longer times, especially for the coarse cupping bowl, would be super useful to get better constraints on what’s going on here. This was extremely illuminating to me, but as usual in science, it brought a few answers and many more questions!

If you’re curious about the extraction yield distributions at later times, here they are:

I’d like to thank Mitch Hale, Can Gencer, Mark Burness and Scott Rao for useful discussions.

The header image is by Alexandre Bonnefoy at Issekinicho Editions, Strasbourg, France.

As Scott Rao and Dan Eils pointed out a while ago now, we have almost certainly not been calculating average extraction yield in a very accurate way. They describe in this blog post how (1) retained liquid in a V60 brew does not really have a zero concentration, as the standard percolation equation assumes, and (2) the retained liquid should really be divided into two categories. The first category, which they termed *interstitial liquid*, is the water between coffee particles, whose concentration *at the exact moment where the brew ends* we want to count in our average extraction yield calculation.

In my last post, I suggested measuring the concentration of the last few drops to estimate the concentration of this interstitial liquid. I think this is more accurate than sampling the grounds after a brew, because there is a risk that the interstitial liquid concentration keeps going up after the brew ended, in a way that has no effect at all on the taste profile of the beverage. Remember that the taste profile correlates with average extraction yield because *how aggressive the extraction was* will dictate the relative abundances of different chemical compounds in the beverage. Therefore we want to calculate by how much the coffee particles were extracted, exactly when the brew ends, regardless of where the concentrated liquid ends up.

Scott and Dan termed the second category of water retained in the coffee bed *absorbed water*; it consists of water that penetrated the coffee cells inside a coffee particle, but never made it out carrying dissolved coffee solids with it. Hence, this liquid should **not** be counted in our average extraction yield equations, because by definition it has not extracted any coffee compounds.

The direct effect of this *absorbed water* will be to slightly decrease the average extraction yields calculated for immersion brews, or the immersion term (the one that goes as *W*/*D*, i.e., brew water over dose) in the general equation. If we knew the weight of water that remains trapped in coffee particles (let’s call it *W*_{abs} for *absorbed water*), then implementing it in the general equation would be relatively straightforward.

To do this, we would need to link the concentration of retained water (which we call *C*_{last} because we measure it through the concentration in the last few drops) with the mass of *interstitial water* (let’s call it *W*_{int}) and the mass of coffee solids dissolved in that retained water (*M*_{ret}), instead of that of all retained water, like this:

*C*_{last} = *M*_{ret} / (*W*_{int} + *M*_{ret})

and then inverting this equation using some algebra would result in:

*M*_{ret} = *C*_{last} · *W*_{int} / (1 - *C*_{last})

The fact that we are now counting only part of the retained water in our equation would also change the relation between beverage mass (*B*) and the mass of brew water (*W*). Remember that this relation also included the mass of coffee solids dissolved in the beverage (*M*_{bev}). That equation now becomes:

*B* = *W* - *W*_{int} - *W*_{abs} + *M*_{bev}

and now we can use this relation to express *M*_{ret} as a function of more readily measurable quantities. Skipping some of the detailed algebra, we can then express our general equation for the extraction yield (*E*) as:

*E* = [*C*_{bev} · *B* + *C*_{last} · (*W* - *B* + *C*_{bev} · *B* - *f*_{abs} · *D*) / (1 - *C*_{last})] / *D*

Remember that *D* is the mass of the coffee dose, and *C*_{bev} is the beverage concentration (sometimes called the beverage *TDS*). In this equation, we also introduced a term *f*_{abs}, which I’ll call the *absorbed liquid ratio*. It is defined in a way similar to the retained liquid ratio, but counts only the part of the liquid that is absorbed by coffee particles and does not count interstitial liquid in the spent coffee bed:

*f*_{abs} = *W*_{abs} / *D*

We already know that *f*_{abs} must be smaller than 2 for V60 and most other percolation brews, because the *liquid retained ratio* is approximately 2 and includes both absorbed and interstitial liquid retained in the spent coffee bed.
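Putting the pieces above together, here is a sketch of the generalized calculation, under my assumed reading of the algebra (the exact derivation is in the linked PDF); all the numbers below are hypothetical:

```python
def extraction_yield(c_bev, c_last, brew_water, beverage, dose, f_abs):
    """Generalized average extraction yield with absorbed water split out."""
    m_bev = c_bev * beverage                    # solids dissolved in the cup
    # Interstitial water: retained water minus the absorbed part f_abs * D.
    w_int = brew_water - beverage + m_bev - f_abs * dose
    m_ret = c_last * w_int / (1.0 - c_last)     # solids in interstitial water
    return (m_bev + m_ret) / dose

# Hypothetical V60-like numbers: 20 g dose, 340 g water, 300 g beverage.
print(extraction_yield(0.014, 0.007, 340.0, 300.0, 20.0, 1.0))
```

Setting `f_abs = 0` recovers the earlier mixed-method calculation, and setting `c_last = 0` recovers the plain percolation result *E* = *C* · *B* / *D*; a larger `f_abs` lowers the calculated yield, as expected.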

Now what we need is a bit of experimentation before we can really use the equations above. We should either come up with an easy way for anyone to directly measure *f*_{abs}, or otherwise hope that it does not strongly depend on roast, brew method and particle size distribution. I suspect that using an aeropress or siphon might generate a scenario where *W*_{int} is close to zero because of the suction. If this is the case, then we would be in a pretty ironic situation where the percolation equation would become more accurate for such methods, while possibly not being accurate at all for V60 brews.

If you’d like to view the detailed algebraic calculations leading to the generalized average extraction yield equation above, you can find it in PDF format here.

Mitch has also just updated his universal extraction calculator to include this new *f*_{abs} term.

I’d like to thank Scott Rao and Mitch Hale for useful discussion.

I often compare the average extraction yield of my brews with other coffee geeks. While it’s an extremely useful measurement, I came to realize that we need to be careful when we compare numbers, because people use several different methods to estimate it. I’d like to review some of those methods here, and discuss precautions I think we should take when communicating our measurements.

A summary of this post is available for download here in the form of a cheat sheet with the relevant equations only. I also added it to the *resources* menu.

VST labs provides phone and computer applications to calculate extraction yields, which take away the need to do the calculations yourself, but even in this scenario it’s really useful to understand how to properly use them, and to understand what the calculations rely on when you use different modes in the application. If you compare your numbers with people not using the VST application, one immediate difference will be that the application accounts for moisture and CO_{2} contained in the bean. Those using simpler approximations of average extraction yield will most likely not be including these correction factors, and as a result your average extraction yields will seem approximately a full percent higher than those of others.

Because of this, I like to set moisture and CO_{2} to zero in the application when I compare numbers with other people. It’s important to keep in mind that this makes the calculation less realistic, but it’s also important to compare apples with apples when you communicate with someone not using the app.

Coffee brews can generally be split between two big categories: percolation and immersion. We’ll discuss these two categories separately, and then we will discuss mixed methods last.

In a percolation brew, fresh water is being continuously added on top of a coffee bed, resulting in an aggressive extraction because fresh water is a great solvent. The coffee bed also acts as a filter, which prevents a lot of very fine coffee particles from ending up in the beverage, and therefore results in a brew with less body and more clarity of taste. Brew methods such as the V60, the Kalita Wave, the Chemex, the moka pot, espresso and batch brewers fall in this category. Espresso is the only one among these where it is not just gravity that is forcing water through the bed of coffee, but it is still a percolation brew.

Calculating the average extraction yield is most straightforward for percolation brews, but it requires an additional measurement on your part if you want to be precise. Typically, brew recipes are designed with a coffee dose in grams (we’ll call it *D* below), and a mass of brew water also in grams (we’ll call it *W*). A refractometer allows you to measure the concentration of coffee in %, let’s call this *C*. This is often referred to as the *total dissolved solids*, or TDS. The concentration of your brew is by definition the amount of coffee mass that made it into your beverage (we will call this *M*_{bev}), divided by the total beverage mass (we’ll call it *B*):

*C* = *M*_{bev} / *B*

There are two reasons why we divided by *B*, and not by the mass of water *W* which was poured over the coffee. First, this quantity *B* also includes the mass of coffee compounds. But most importantly, a lot of water actually never made it into the cup of coffee, and instead remained trapped in the spent coffee bed. The mass of water in grams that each gram of coffee can retain is called the *liquid retained ratio* (often called LRR, we will call it just *L*). Typically, a coffee particle retains twice its weight in water, so in other words its liquid retained ratio is approximately two. We can now write the relation between the total beverage mass *B* and the other variables:

*B* = *W* - *L* · *D* + *M*_{bev}

The first term on the right-hand side is the total amount of water poured, from which we subtract the amount of water retained in the spent coffee bed, and to which we add the mass of dissolved coffee solids. In this discussion, we will ignore the effects of CO_{2} and moisture in the coffee bean.

The quantity we want to measure is the average extraction yield (we will call it *E*), and from its definition you might have foreseen that it will be given by:

*E* = *M*_{bev} / *D*

If that’s what you expected, you are *kind of* right. In reality, we should include the coffee compounds that were dissolved in *all of the water* at the exact moment where the brew ended, because this is the quantity that informs us on the profile of chemical compounds that were extracted from the beans. Whether these compounds ended up in the cup of coffee or in the spent coffee bed, we must count them if we want the average extraction yield to correlate with flavor profile as best as it can.

I know this is counter-intuitive, so let me offer a thought experiment to settle this. Imagine you brew yourself a V60, place the spent bed in a glass, and immediately pour half of your coffee cup in the spent bed. This will artificially bump up your liquid retained ratio, and halve both *M*_{bev} and *B*. Did you just change the flavor profile of your cup? You didn’t, but the equation above would tell you that you just halved the extraction yield, so we know it’s wrong.

The reason why I called this extraction equation *kind of right* is because it **assumes** the water retained in the coffee bed has a concentration of zero. The exact equation is rather:

*E* = (*M*_{bev} + *M*_{ret}) / *D*

where *M*_{ret} is the mass of coffee solids dissolved in the retained water exactly when the brew ended. But as we just discussed, *M*_{ret} is approximately zero in a percolation brew.

We already know the dose of coffee because that’s something we specify when we build a brew recipe and (hopefully) actually measure before brewing. What we must now deduce is this mass of coffee dissolved in the brew water, *M*_{bev}. The clue we have to figure it out is the concentration *C* which we measured with the refractometer. If we combine the first two equations in this blog post, we get:

*C* = *M*_{bev} / (*W* - *L* · *D* + *M*_{bev})

and we now want to invert this equation to obtain *M*_{bev} as a function of the concentration *C*. This takes a bit of algebra, which I’ll spare you. The result is this:

*M*_{bev} = *C* · (*W* - *L* · *D*) / (1 - *C*)

And we can now directly calculate the extraction yield, by substituting *M*_{bev} using the equation above:

*E* = [*C* / (1 - *C*)] · (*W* - *L* · *D*) / *D*

The 1/(1-*C*) factor on the right-hand side of the equation has a very small effect on the calculated extraction yield for filter coffee, typically between 0.2% and 0.4%. What this term represents intuitively is the contribution of extracted coffee mass to the beverage weight, so it is more important when *C* is high.
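A quick numerical check of the inversion and of the size of that 1/(1-*C*) correction, with illustrative numbers (a hypothetical 20 g dose, 340 g of water, *L* = 2 and a TDS of 1.4%):

```python
dose, water, lrr, conc = 20.0, 340.0, 2.0, 0.014

# Inverted equation for the dissolved mass in the beverage.
m_bev = conc * (water - lrr * dose) / (1.0 - conc)
# Consistency: plugging m_bev back must reproduce the measured concentration.
assert abs(conc - m_bev / (water - lrr * dose + m_bev)) < 1e-12

ey_full = conc / (1.0 - conc) * (water - lrr * dose) / dose
ey_approx = conc * (water - lrr * dose) / dose  # dropping the 1/(1-C) factor
print(f"E = {ey_full:.1%}, correction = {ey_full - ey_approx:.2%}")
```

With these numbers the correction is about 0.3%, right in the quoted 0.2% to 0.4% range.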

The equation above is useful if you know the *liquid retained ratio*, or want to approximate it. But in practice it’s more precise and easier to actually weigh the mass of your brewed coffee *B* (just note the mass of your empty mug before brewing). Look how much easier the extraction yield equation becomes, and it’s not an approximation:

*E* = *C* · *B* / *D*

Measuring the mass of your brewed coffee makes the calculation of average extraction yield much easier, and more precise! It’s a win-win, so I really recommend that you always do it. I recommend this even if you use the VST application, because then you don’t need to assume any liquid retained ratio. Make sure the application is in percolation mode, and then you can directly adjust the beverage weight to your measured *B* in the application, instead of adjusting the amount of brew water (which we called *W* here).
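For example, with hypothetical numbers – a 20 g dose, a measured beverage weight of 301 g and a measured TDS of 1.40% – the calculation is a one-liner:

```python
dose, beverage, conc = 20.0, 301.0, 0.014  # grams, grams, measured TDS

# E = C * B / D: no liquid retained ratio needed, and no approximation.
ey = conc * beverage / dose
print(f"{ey:.1%}")  # about 21.1%
```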

Unless you use an unusually fine grind size and filter papers with unusually large pores, syringe filters should not be needed when you measure the concentration of a percolation brew, with the very important exception of espresso (see a recent awesome experiment by Mitch Hale about that). If you want to be sure your particular set-up does not require syringe filters, I recommend measuring your concentration with and without for a few brews, and determine whether they affected the measurement.

In my first blog post, I made the mistake of ignoring water retained in the spent coffee bed when I built a *coffee control chart* that is useful for V60 brews. As a result, my *fixed ratio* (*W*/*D*) curves were offset (this should now be corrected in the post).

Here’s an updated coffee control chart that assumes a liquid retained ratio of 2, which is much more appropriate for percolation brews than the one I had posted in my first blog post:

An immersion brew consists of plunging ground coffee in water (or the reverse) and leaving the same water with the coffee until the end of the brew. Extraction happens a bit more slowly because as water becomes more concentrated, its power as a solvent goes down. The spent coffee is then typically gently separated from the water to avoid drinking it, but typically a lot of fine coffee particles end up in the beverage, resulting in more body and less flavor clarity. Cupping and the French press fall in this category. You may be tempted to think that other brew methods like the aeropress, vacuum pots (also called siphons or syphons) and the Clever Dripper also fall in this category, but they don’t exactly – we’ll discuss these in the next section.

In an immersion brew, most of the technical discussion we already had in the *Percolation* section still holds. The main difference is that you cannot ignore the mass of coffee solids dissolved in water retained by the spent coffee bed anymore, and the approximation that the liquid retained ratio is near 2 can become very inaccurate depending on the brew method. Let’s go back to our full equation for the average extraction yield *E*:

*E* = (*M*_{bev} + *M*_{ret}) / *D*

We must now calculate *M*_{ret}, and to do this it is useful to recall that, at the precise moment where the brew ended, the concentration of coffee that will end up in the cup or in the spent coffee bed is the same. We can therefore calculate *M*_{ret} with the following equation:

*C* = *M*_{ret} / (*L* · *D* + *M*_{ret})

which can also be inverted with a bit of algebra:

*M*_{ret} = *C* · *L* · *D* / (1 - *C*)

Now if we put together our equations for *M*_{ret} and *M*_{bev} in the extraction yield equation and do a bit more algebra, we end up with:

*E* = [*C* / (1 - *C*)] · *W* / *D*

As you can see, all terms with the liquid retained ratio *L* disappeared! This means you do not need to weigh your beverage or make an assumption about *L*, which makes it easier to calculate the average extraction yield of an immersion brew. Again, the term in 1/(1-*C*) on the right-hand side of the equation is a small correction that has an effect of 0.2% to 0.5% on the calculated extraction yield.
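For example, with hypothetical cupping numbers – a 12 g dose, 200 g of water and a measured TDS of 1.35%:

```python
def immersion_ey(conc, water, dose):
    # All liquid-retained-ratio terms cancel out for immersion brews.
    return conc / (1.0 - conc) * water / dose

# Hypothetical cupping bowl: 12 g of coffee, 200 g of water, 1.35% TDS.
print(f"{immersion_ey(0.0135, 200.0, 12.0):.1%}")  # about 22.8%
```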

The fact that beverage weight disappeared in the equation above should tell you something about how to use the VST application in immersion mode: you’ll want to adjust “BW” directly (here we call it water weight *W*), rather than the beverage weight, to achieve a better precision.

Syringe filters are needed to measure the concentration of an immersion brew. These methods all let enough undissolved fine coffee particles into the beverage that you will get very imprecise and inaccurate measurements if you don’t use syringe filters in this scenario.

The coffee control chart appropriate for immersions doesn’t need to assume any liquid retained ratio:

There are a few methods that cannot be simply categorized as percolation or immersion, and that are instead better described by an initial immersion phase, followed by a percolation phase where the already concentrated brew water passes through the partly spent coffee bed and typically also a filter to end up in the cup of coffee. Coffee brewed with these methods shares the properties of both: extraction is a bit more aggressive than an immersion brew alone because of the final percolation phase, but not as aggressive as a pure percolation method, because the percolation phase is done with water already concentrated with coffee, which is therefore a worse solvent. Depending on the details of where the filter is placed and what force pushes the coffee through the filter, a varying amount of fine coffee particles (smaller than in typical immersion brews) ends up in the cup. Similarly, the *liquid retained ratio* will strongly depend on this force. The brew methods that fall in this category are the aeropress, the siphon and the Clever Dripper.

The main difference between these mixed methods and regular immersions in how they affect the calculation of extraction yield lies in the fact that the concentration in the spent coffee bed is not necessarily the same as in the coffee cup, but it is not zero either. Instead, it is somewhere in between, and will be close to the concentration of water at the end of the immersion phase, just before the percolation phase. Accurately measuring the extraction yield of these methods is more cumbersome and twice as expensive if you use a brew method that allows enough fines in the beverage that syringe filters are needed. Basically, you need to measure the weight of your beverage *B*, the concentration of your beverage (let’s call it *C*_{bev}), and the concentration of your spent bed (let’s call it *C*_{last}). You can measure the latter by keeping the few last drops of your brew in a different container. Make sure to keep at least a dozen drops if you need a syringe filter.

You can calculate *M*_{bev} and *M*_{ret} with the exact same equations as those in the sections above, by just replacing the concentration *C* with the respective *C*_{bev} and *C*_{last} concentrations. There is just one step that is easy to miss: when you estimate the total weight of retained water (let’s call it *W*_{ret}) from the water and beverage weights, make sure you don’t forget the contribution of coffee solids that were dissolved in the beverage:
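Written out (this is my reconstruction of the equation that originally appeared here, with all weights in grams and *M*_{bev} the dissolved mass in the beverage), the beverage carries away *B* − *M*_{bev} of water, so:

```latex
W_{ret} = W - \left( B - M_{bev} \right) = W - B + M_{bev}
```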

This will allow you to properly write down the equation linking the concentration to the dissolved mass in the retained liquid:
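A plausible form of that equation, following the same convention as before (concentration expressed as a fraction of the total liquid weight, i.e. water plus dissolved solids); I reconstructed it from the surrounding text, so treat the exact form with caution:

```latex
C_{last} = \frac{M_{ret}}{W_{ret} + M_{ret}}
\qquad \Longrightarrow \qquad
M_{ret} = \frac{C_{last}\, W_{ret}}{1 - C_{last}}
```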

Add to this a little bit of algebra, and you get the following equation:
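Combining the pieces above (with *M*_{bev} = *C*_{bev} *B* and *W*_{ret} = *W* − *B* + *M*_{bev}), the general equation should take a form like the following; this is my reconstruction, so double-check it against the detailed calculations before relying on it:

```latex
EY = \frac{M_{bev} + M_{ret}}{D}
   = \frac{1}{D}\left[\, C_{bev}\,B \;+\; \frac{C_{last}\left( W - B + C_{bev}\,B \right)}{1 - C_{last}} \,\right]
```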

Note how setting *C*_{last} = *C*_{bev} will simplify it to the immersion equation, and setting *C*_{last} = 0 will simplify it to the percolation equation, as it should. In other words, the equation above is more general, and includes both the immersion and percolation cases.

If you are interested in viewing the detailed calculations leading to this more general equation, you can find them in PDF format here.

This particular equation is not currently supported by the VST application. The closest you can do is assume that *C*_{last} = *C*_{bev} and use the immersion equation. In fact, there are some recipes for which this approximation will be very good; I encourage you to verify this for your particular recipe, and see the difference you get from this equation versus the immersion equation. If you find out that the difference is small, then just use the immersion equation for that particular recipe.

This equation is a bit large, and clumsy to use, so Mitch Hale graciously created a web tool so that you can use it much more easily ! Please have a look at it here.

Here’s a way to tell if the immersion equation is accurate enough, in one equation:
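The original equation is not reproduced here, but an equivalent criterion can be obtained by requiring that the difference between the general equation and the immersion equation be smaller than your measurement precision (about 0.1% of extraction yield); in my reconstruction this reads:

```latex
\left|\, \frac{C_{bev}}{1 - C_{bev}} - \frac{C_{last}}{1 - C_{last}} \,\right|
\cdot \frac{W - B + C_{bev}\,B}{D} \;\lesssim\; 0.1\%
```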

If that constraint is verified, then you can just use the immersion equation.

Determining whether these mixed brew methods require syringe filters or not will require experimentation on your part. Try measuring your concentrations with and without them for five or six brews, and see whether the syringe filters had an effect or not. In my very limited trials, it seems that a regular aeropress method requires a syringe filter, even if you use two filters. With the siphon, I noticed syringe filters were also needed, at least with the relatively fine grind size I tested and the Hario paper filters. Combining the aeropress with the thick Aesir filters and the Prismo valve, at a grind size slightly coarser than typical V60 brews, did not seem to require syringe filters. Do not take these as absolute recommendations, but rather as an illustration that whether syringe filters are required will depend on several parameters.

As Mitch Hale pointed out recently on his Instagram account, when using a scale precise to 0.1 grams or worse to measure your coffee dose, it doesn’t make sense to report average extraction yields with more than one decimal. This is because the effect of a 0.1 gram measurement error on your coffee dose will impact your calculated average extraction yield by about 0.1%, depending on your exact recipe.
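As a quick sketch of that error propagation (my own example numbers, using the simple percolation relation *EY* = *C* × *B* / *D*):

```python
# Sketch of the error-propagation argument (my own numbers, not from the post):
# for a percolation brew, EY = C * B / D, so a dose error dD propagates to
# first order as dEY = EY * (dD / D).

def ey_percolation(concentration, beverage_weight, dose):
    """Average extraction yield of a percolation brew, as a fraction."""
    return concentration * beverage_weight / dose

dose, d_dose = 20.0, 0.1                    # grams, with a 0.1 g scale error
ey = ey_percolation(0.0135, 300.0, dose)    # a typical 1.35% TDS filter brew
ey_err = ey * d_dose / dose                 # first-order error on the yield

print(f"EY = {ey*100:.2f}% +/- {ey_err*100:.2f}%")
```

With a 20 gram dose, the 0.1 gram scale error alone already shifts the calculated yield by about 0.1%, which is why extra decimals carry no real information.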

When sharing extraction yields, I recommend that you also report all the variables that are required to use the relevant equation, plus the water/dose ratio. In the example of a percolation brew, this means reporting your coffee dose, brew water ratio, beverage weight and beverage concentration.

While this blog post summarizes the concepts behind the equations currently used for calculating extraction yields, it is likely not the final answer to how we should calculate them. More than a year ago, Scott Rao posted a very interesting discussion about the limitations of our current assumptions, and how he thinks that the retained liquid in percolation brews is in fact not completely devoid of dissolved solids. I really recommend you read his post, especially if you just went through all of this blog post with a fresh memory of how things are currently calculated. I’ll definitely do some experiments in the future and think about how we can implement Scott’s and Dan Eil’s suggestions.

*Disclaimer:* I was offered the VST Coffee Lab III refractometer for free by Vince Fedele, but I do not have any financial interest related to any coffee equipment.

Mitch ran a very dedicated experiment to compare the precision and accuracy of filtration by VST syringe filters as well as centrifuging espresso samples. The results are very clear: while the precision inherent to syringe filters is as good as the internal precision of the VST Coffee Lab III refractometer (0.01% total dissolved solids), not using the syringe filters will give you concentration measurements around 0.38% too high, **and** way less precise on top of that.

Even in a best-case scenario where all coffee roasts and origins have the same amount of oils and suspended solids (this is most likely false), deciding not to filter your espresso sample, and instead subtract 0.38% from it, would result in a very degraded precision of about 0.1%, instead of 0.01%. Hence, I highly recommend that you always filter your espresso shots before you measure their concentration.

Another thing that Mitch concluded from his experiment is that centrifuging makes it possible to obtain measurements as accurate as with the VST syringe filters, and that contrary to some popular worries, the syringe filters do not bias measurements by filtering out some of the coffee’s dissolved solids.

Please have a look at his blog post for much more detailed results, and a very detailed description of his experiment.

Another perk from Mitch: he had the very ingenious idea to add a full *glossary* section to his website, which I will now link to in the *Resources* menu of my blog. This way, every time you encounter a weird geeky word that you’re not sure about, you can consult his glossary !

*After receiving some feedback about this post, I would like to address a few things that I think were not clear enough. I’d like to thank Scott Rao for his comments.*

*First, when I use language such as “over-extraction”, and “under-extraction”, I don’t mean that the associated extraction numbers are necessarily undesirable. What I really mean is “more extracted” and “less extracted” – the actual level of extraction that is desirable depends on several things, one of which is the subjective sensory factor. Another is the narrowness of the particle size distribution generated by a grinder, as I mentioned in the post. So, it would be wrong to say that an “optimal extraction is at 21%” for example; the exact number that someone finds optimal will depend on preference, roast development and quality, and evenness of extraction. The extraction yield numbers that I give in the text are just examples that I threw around, please don’t take them as absolutes.*

*It also came to my attention that the evidence for fast-extracting compounds being more on the “vegetal and sour” side of taste is speculative, so please take this claim with a big grain of salt. Instead, it would be more careful to say that low extraction yields will generally produce a less balanced overall taste, because only some fraction of all available chemical compounds get extracted. Think of it like listening to music with a very aggressive equalizer turned on. The evidence seems to be stronger on the other side of average extraction yields, in the sense that bitterness and astringency are part of the slow-extracting compounds, and they tend to take a lot of space in the perceived taste profile of a cup.*

*I did not give erosion a detailed treatment in this post, but it still plays a role even in filter coffee. Erosion is simpler to model, because the fines immediately extract completely on contact with water, but I did not want to include it in this discussion without some data to play with. The amount of fines present in a particle distribution will definitely have a strong effect on the flavor profile of the cup, on top of the size of particles – I will talk more about it in the near future !*

*Finally, please do take this whole model with a grain of salt – it has not yet been tested against real data, I assumed spherical particles, and I based all of it on the assumption that chemical compounds extract at a rate that decreases exponentially. My hope is that it will be useful to understand **some** aspects of extraction dynamics, but it is in no way a perfect model.*


Coffee extraction is a subject I’ve touched a few times on this blog. Today I want to have a more profound discussion on this subject, because I recently realized I had a very simplified view of what’s happening during coffee extraction. I’ll go over the basic principles first, and then gradually deeper and deeper in this rabbit hole. This is one of those times where I *will* be posting some equations, but I will try to translate them in words and figures as we go along, so please don’t feel bad if you don’t know anything about maths. I hope to be able to describe them well enough that you won’t need to have a degree in maths or physics to follow the big picture. The value of equations is that they allow me to see what arises from just a few fundamental suppositions.

Specialty coffee brewers often talk about *total dissolved solids* (TDS) and *average extraction yield* (EY) when they describe a method or a coffee they brewed. As I briefly described earlier on this blog, the first concept of TDS really describes the concentration of your beverage: espresso typically has 7% to 12% TDS, and filter coffee typically has 1.3% to 1.45% TDS. The second concept of average extraction yield describes what fraction of the coffee beans were dissolved in your beverage. This number is typically between 19 and 23%, and can never go above ~ 30% because the remaining 70% of the coffee beans is just not dissolvable in water.

At first glance, knowing the average extraction yield might seem to be just another, more convoluted way of describing the concentration of your coffee. But it’s not ! Average extraction yield was found to correlate very well with the taste profile of a brew. If you make three brews with the same coffee, and reach 18%, 22% and 27% average extraction yields, then add the appropriate amount of water such that they all have the same concentration (e.g. 1.3% TDS), the three cups will taste *very* different. The first one will tend to be more vegetal and sour, the second one will be more well-balanced, complex and enjoyable, and the third one will be more bitter and astringent.

Why does average extraction yield correlate so well with flavor profile ? Ultimately, this is due to *different chemical compounds extracting at different rates*. Some of the compounds that we typically don’t like to taste are very slow to extract (thankfully !), so they will start to become apparent only when you reach high extraction yields. Other components that extract very fast are enjoyable, but if they’re not balanced with other stuff they produce a less interesting cup. In other words, our goal is to extract as much of the good stuff (the compounds that extract at *average* and *fast* speeds) as we can, while avoiding the nasty stuff (the compounds that extract at *slow* speeds).

The concept of an *average* extraction yield is useful, but it’s not at all the ultimate descriptor of a coffee cup’s flavor profile. Imagine a situation where some of your coffee grounds extract faster than others – the resulting coffee cup might be composed of some grounds extracted at 18%, and others extracted at 28%, and you could still get an average extraction yield around 23% in the cup. If you were to compare this with a cup where all coffee grounds extracted at 23% exactly, you would most likely find the second cup more enjoyable (this is not the one they sell at *Second Cup*). Basically, the second cup has extracted a lot of the “good stuff”, and very little of the bitter, astringent taste. The first cup however has a lot of coffee grounds that reached a 28% extraction yield, so they will be contributing *some* of the less desirable taste in the cup.

One practical result that arises from this is that lower quality equipment or brew methods that produce a wider range of extraction yields will only allow you to reach average extraction yields around 20-21%. If you go any higher than this, then you will start getting too much of the bitter taste. If you manage to produce a brew where the extraction of individual coffee particles is much more uniform, then you will be able to reach higher average extraction yields, about 22-24%, without getting too much of the bad stuff.

One thing that can explain why your coffee particles may not all extract at the same rate is the fact that they may have *different sizes*. As Scott Rao explains nicely in this blog post, there are two completely different physical processes by which coffee extracts: *erosion* and *diffusion*. Erosion happens when a coffee cell is broken and water can very easily wash away all of the dissolvable compounds that it contains. As coffee cells are very small (around 20 microns), this happens only at the surface of coffee particles, where some broken cells are exposed, or in coffee particles so small that all coffee cells are broken up. In this scenario, water dissolves the full ~30% of anything that can be dissolved *very fast*. As you may have guessed, erosion is the dominant process in espresso or Turkish brews, because those use very fine grind sizes.

Diffusion is the process that dominates in filter brews. In this scenario, water has to enter the tiny pores of the coffee cell walls, dissolve the flavors, and come back out through the same tunnels. As you might expect, diffusion is *much* slower than erosion. In this post I will focus more on diffusion, because filter brews are my bigger focus at the moment.

Now comes the part I did not understand very well until very recently. One thing I mentioned earlier on this blog was that smaller coffee particles extract faster than the larger particles. This was actually *kind* of true, but my reasoning was not. I was really confusing the extraction of a *single* coffee particle with that of a *population* of coffee particles. If you have a collection of very coarse coffee particles, they will collectively extract much slower than a collection of very fine coffee particles, because the finer particles are presenting *much more total surface* for the *same total mass* of coffee.

If you look at a single coarse particle and a single fine particle however, and measure how fast they provide flavor compounds, then the picture is quite different. The single fine particle is much lighter, and has a much smaller total surface than the single coarse particle, so it is actually the coarse particle that would win the race to higher concentrations. Assuming the fine particle is large enough that we are still within the regime of diffusion, each cell at the surface of the fine coffee particle is extracting at the exact same speed as each cell at the surface of the coarse particle.

The last paragraph is really key to understanding why I have been thinking a lot about this lately. It’s worth reading it again and making sure you understand it well. Once you do, something might become clear to you: a population of finer grinds will reach higher beverage concentrations faster, because you have a larger number of particles and they collectively provide coffee compounds faster than a collection of coarse particles, thanks to their larger total surface area. Our picture of *how TDS depends on grind size* is quite clear.

**BUT**, once you accept that each coffee cell at the surface of each coffee particle extracts the same way and at the same speed *regardless of the particle size*, then it becomes entirely mysterious why different grind sizes or different particle distributions would produce different uniformities of extraction yield, and different taste profiles ! If this was the whole picture, then the only thing we would ever care about would be the beverage concentration (in % TDS), and all coffee cells would always be providing us with the same flavor profiles whether they are attached to a large or a small coffee particle.

I think the key to understand the link between the distribution of particle size and the distribution of extraction yield is something else: *deeper layers of coffee cells extract slower than surface layers*. Imagine you had only two layers of coffee cells that can be reached by water, and the deeper layer extracts much slower than the surface layer. Now imagine you have two spherical coffee particles, one that is just as large as two layers of coffee cells, and one that contains thousands of layers of cells. Let’s draw this:

It might become obvious from this drawing that the amount of *second-layer* cells is much smaller than the amount of *surface cells* in the small coffee particle. In the case of the very large particle, they’re almost equal ! This immediately provides a way to understand how different-sized coffee particles are providing different flavor profiles. The small particle will be producing a more uniform extraction yield, because it is composed of one surface layer extracting uniformly, plus a small contribution of a deeper layer that extracts slowly. The combination will be a little bit non-uniform. It will be skewed slightly on the low extraction side, because of the small contribution of these second-layer coffee cells. The larger coffee particle will produce a *much less uniform* extraction, because the contribution from the slowly extracting second-layer cells is as big as that of the surface cells. Once again, I think this might be easier to understand with a figure:
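A rough numeric version of this layer-counting argument, assuming spherical particles and ~20 micron cubic cells (the function and the example radii are mine):

```python
# Compare the amount of second-layer cells to surface cells for a small
# and a large spherical particle, using shell volumes as a proxy for
# cell counts. Assumes ~20 micron cell layers, as in the post.

def layer_volume(radius_um, layer_index, cell_um=20.0):
    """Volume of the spherical shell occupied by a given cell layer
    (layer 0 = surface). The 4*pi/3 factors cancel out in ratios."""
    outer = radius_um - layer_index * cell_um
    inner = max(outer - cell_um, 0.0)
    if outer <= 0:
        return 0.0
    return outer**3 - inner**3

for radius in (40.0, 1000.0):
    ratio = layer_volume(radius, 1) / layer_volume(radius, 0)
    print(f"R = {radius:4.0f} um: second layer / surface layer = {ratio:.2f}")
```

For a 40 micron radius particle the second layer holds only ~14% as many cells as the surface layer, while for a 1000 micron particle the two layers are nearly equal in size, which is exactly the asymmetry described above.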

In real life, water is able to reach a bit deeper than two layers of coffee cells. In one of my earlier posts, I discussed a recent experiment carried out by Barista Hustle, which demonstrated that water can reach down to approximately the 5th layer of coffee cells on average. If you haven’t watched their video, it’s worth it – this is what made me realize that I was misunderstanding the details of extraction.

So now, we saw that each size of coffee particle produces a distinct profile of extraction yield, and therefore a distinct flavor profile. We also saw that coarser particles inevitably produce less uniform extractions. You can now see why using a grinder that produces a very wide distribution of coffee grind sizes might be a problem: you are mixing up lots of different flavor profiles. However, this new way of thinking about extraction might also have you realize that a *perfectly uniform* particle distribution will **not** produce a perfectly uniform distribution of extraction yield !

Instead, such a perfectly uniform particle distribution would just produce exactly the same extraction yield distribution as a single coffee particle would – and it is not uniform. Still, the final extraction yield distribution will be tighter if your particle size distribution is also tighter, which is desirable. It still came as a shock to me that even with a light year-wide roller mill grinder, you will not obtain a perfectly uniform distribution of extraction yields, unless you also use coffee particles that all contain exactly 2 x 2 x 2 intact coffee cells (OK, maybe you can do this if you have such a large grinder).

There’s also another consideration about grind size which I did not touch on in this discussion: coffee waste. The coarser you grind, the larger the total mass of coffee that is inaccessible to water will be. This means that, in addition to changing the taste profile, grinding coarser is in some way similar to also using a smaller coffee dose. I won’t discuss this more in this post, but it’s worth remembering.

Now that I’ve tried to lay out the concepts with hand-waving explanations and drawings, I’d like to attempt formalizing them with equations. Those not too versed or interested in maths may find the rest of this post anywhere between boring and insufferable. I find it really interesting to be able to write down equations to describe a system and see where they lead me. Often, this is a way to realize some consequences that you may not have foreseen, and I think *some* of you will find value in the figures below (or even in the equations).

The first assumption I will base this formalism on is that each of the chemical compounds in a coffee cell gets extracted at an exponentially decreasing rate:
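In symbols, one way to write such a rate is the following, where *a_i* is just a compound-specific constant that sets the overall scale (this is my reconstruction of the missing equation):

```latex
\frac{\mathrm{d}m_i}{\mathrm{d}t} = a_i \, e^{-t/\tau_i}
```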

In this equation, *m_i* is the amount of mass extracted from a chemical compound that we would call “compound number *i*“, *t* is the amount of time since the beginning of the extraction, and *τ_i* is the characteristic time needed to extract the compound: it’s larger for the more slowly extracting compounds. The left side of the equation is a *time derivative*, which means that it describes the rate of mass extraction per unit of time. It might seem like I pulled this equation out of nowhere, but it’s something that arises quite often in these kinds of problems: there are initially a lot of different ways for water to come into contact with large amounts of the solvable compound, and the less of it remains in the coffee cell, the slower the extraction rate becomes. I’m not convinced this is the ultimate way to characterize this problem, but I think it’s at least a good one.

This equation tells us about the *rate of increase* of the compound, but what we really want to know is the amount of extract that ends up dissolved in water as a function of time. To obtain this, we need to solve the equation above (which I won’t do in detail here). The solution is:
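Reconstructed from the description that follows, the solution is:

```latex
m_i(t) = M_i \left( 1 - e^{-t/\tau_i} \right)
```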

where a new constant *M_i* was introduced, representing the total mass of this particular compound inside the coffee cell. At this point, it would be worth visualizing what this equation looks like:
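To get a feel for this curve without the original figure, here is a minimal numerical sketch (my own):

```python
import math

def extracted_fraction(t, tau):
    """Fraction m_i / M_i extracted after time t,
    from m_i(t) = M_i * (1 - exp(-t / tau_i))."""
    return 1.0 - math.exp(-t / tau)

# The curve rises steeply at first, then flattens out:
for multiple in (1, 2, 3, 5):
    frac = extracted_fraction(multiple, 1.0)
    print(f"t = {multiple} tau -> {frac:.3f} of the compound extracted")
```

After one characteristic time, about 63% of the compound has been extracted; after three, about 95%; the remainder trickles out ever more slowly.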

Now, given that each chemical compound extracts at its own speed, obtaining the total mass of *everything extracted* requires you to take the sum of the equation above, for all available compounds:
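That sum is simply (my reconstruction):

```latex
m(t) = \sum_i m_i(t) = \sum_i M_i \left( 1 - e^{-t/\tau_i} \right)
```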

Now, what does the sum of lots of different extraction equations like those look like ? It’s really hard to tell if you make *no assumption at all* about the collective properties of the extraction rates *τ_i*. One way to get around that is to do it numerically; another is to ask what the result looks like if all the extraction rates *τ_i* are close to one another, and thus close to an average extraction rate *τ*. A mathematical way to express this is:
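That is (reconstructed from the description that follows):

```latex
\frac{1}{\tau_i} = \frac{1}{\tau} + \epsilon_i
```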

Here, *τ* is the average characteristic extraction time, and *ε_i* is just a symbol I decided to use to express the small deviations around the average, for each compound. It may seem weird that I defined this equation with respect to the *inverse* of the extraction times, but it will make the subsequent maths easier. Now, I need to make the approximation that the deviations are very small with respect to the average:
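In symbols (again my reconstruction):

```latex
\left| \epsilon_i \right| \ll \frac{1}{\tau}
```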

And this will allow me to simplify the equation for the total extracted mass as a function of time, with a neat trick that physicists love, called a Taylor expansion around *ε_i = 0*:
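Writing *M* = Σ_i *M_i* for the total extractable mass, the first-order result should look like this (my reconstruction, consistent with the description that follows):

```latex
m(t) \approx M \left( 1 - e^{-t/\tau} \right) + t \, e^{-t/\tau} \sum_i M_i \, \epsilon_i
```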

The technical term for what I just did there, besides annoying most of my readers, is a first-order approximation (literally, not just figuratively). You might notice that the first term on the right part of the equation is very similar to the equation we had for a single species, with *τ_i* replaced by the average *τ*. This is very neat, because it tells us that this very similar equation is a *zeroth-order* approximation of the real solution. This means that it captures the largest portion of the answer, as long as the *ε_i* factors are small like we first assumed.

The second term, which looks a bit more complex and still has this big Σ symbol that represents the sum of many terms (i.e., all the chemical compounds), is a *first-order perturbation*. If you add it, your answer will be more precise. There is an infinite number of smaller and smaller terms that you could add, which would make your answer more and more precise. If you added this infinity of terms, the solution would be valid regardless of whether all the *ε_i* are small or not. It turns out that the zeroth-order approximation is *quite* good (to 1% precision) if you have a lot of chemical compounds (at least 100) even if some extract ~15% faster or slower than the average:
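That 1% claim is easy to spot-check numerically; here is a small sketch (my own, with made-up masses and rates within ±15% of the mean):

```python
import math
import random

random.seed(0)

# Numerical check of the claim: with ~100 compounds whose extraction rates
# deviate by up to ~15% from the mean rate, the zeroth-order curve
# M * (1 - exp(-t/tau)) tracks the exact sum to about 1% or better.
n_compounds = 100
rates = [1.0 + random.uniform(-0.15, 0.15) for _ in range(n_compounds)]  # 1/tau_i
masses = [random.uniform(0.5, 1.5) for _ in range(n_compounds)]          # M_i
total_mass = sum(masses)
mean_rate = sum(m * r for m, r in zip(masses, rates)) / total_mass       # mass-weighted 1/tau

worst = 0.0
for step in range(1, 101):
    t = 0.05 * step   # scan t from 0.05 to 5 characteristic times
    exact = sum(m * (1 - math.exp(-r * t)) for m, r in zip(masses, rates))
    zeroth = total_mass * (1 - math.exp(-mean_rate * t))
    worst = max(worst, abs(exact - zeroth) / total_mass)

print(f"worst deviation: {worst*100:.2f}% of the total extractable mass")
```

With these numbers the worst deviation stays well below 1% of the total extractable mass, in line with the statement above.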

Now that we have described more formally what happens to a single layer of cells, we can turn our attention to the more general case where there is more than one layer. Without any detailed experiment, we need to make an assumption about the rate at which water is able to access the deeper layers. Intuitively, I see water diffusing in the coffee particle like an ensemble of small creatures that walk around randomly, and have a very small chance of getting through a *door* which leads to a deeper level of cells, or one that leads to a shallower level. In order for water to grab some compounds from the deeper layers and bring them back out, it needs to be able to pass back and forth through several doors.

What this kind of scenario tends to produce is *also* an exponentially decreasing access to the deep layers (by this point, do you get the impression that I love exponentials ?). To be sure about that, I decided to actually run such a simulation, where I took a million “*droplets*” of water that have a 0.1% probability of crossing over to a deeper or shallower layer at every time step. I ran the simulation for ten thousand time steps, and every time a droplet came out of the coffee particle, I asked it how deep it reached. I then made a figure with the distribution of depths that each droplet reached:
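Here is a scaled-down sketch of that simulation (my own implementation; I use a larger crossing probability and far fewer droplets than described above so that it runs in seconds, since only the shape of the distribution matters):

```python
import random
from collections import Counter

random.seed(42)

def droplet_max_depth(p_cross=0.1, max_steps=10_000):
    """One water droplet doing a random walk across cell layers.
    At each time step it has probability p_cross of moving one layer
    deeper, and p_cross of moving one layer shallower; going shallower
    than the surface means it exits the particle. Returns the deepest
    layer reached, or None if it never exits within max_steps."""
    depth, deepest = 0, 0
    for _ in range(max_steps):
        r = random.random()
        if r < p_cross:
            depth += 1
            deepest = max(deepest, depth)
        elif r < 2 * p_cross:
            depth -= 1
            if depth < 0:          # droplet left the particle
                return deepest
    return None

depths = [droplet_max_depth() for _ in range(10_000)]
hist = Counter(d for d in depths if d is not None)
for layer in range(6):
    print(f"max depth {layer}: {hist[layer]} droplets")
```

The counts fall off quickly with depth: most droplets never make it past the first couple of layers before exiting, which is the behavior the figure illustrated.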

In other words, the deeper layers will extract exponentially slower. If we decide to call *x* the coordinate that points inward to the deeper layers, then the characteristic time of extraction *τ_i* will be the combination of an intrinsic time *τ’_i* that depends on chemistry, and the depth *x*:
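A natural form for this relation, reconstructed from the description of *λ* that follows, is:

```latex
\tau_i(x) = \tau'_i \, e^{\, x / \lambda}
```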

The new parameter *λ* represents a characteristic depth before which most of the extraction happens. If it’s small, then the extraction will only happen in a very thin shell of the coffee particle, and if it’s very large, some extraction may happen deep into its core. In reality, thanks to Barista Hustle we know that this parameter *λ* is probably of the order of 100 microns.

This new level of complexity means that we need to sum the extraction equations over all depths, each having their own extraction speed. The result is:
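Up to a normalization constant, the result should look something like this (my reconstruction, so treat the exact form with caution):

```latex
m_i(t) \;\propto\; \sum_{k=0}^{R/s} \left( R - k\,s \right)^2
\left( 1 - e^{-t \,/\, \left( \tau'_i \, e^{\, k s / \lambda} \right)} \right)
```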

In that equation, *R* is the radius of a *spherical* coffee particle and *s* is the typical length of a coffee cell (about 20 microns, we assumed the cells are cubes). The index *k* is representative of the layer, where *k = 0* is the surface. There’s a term in *(R – k s)* squared that appeared, which is due to the geometry of a spherical particle; each level deeper has a smaller amount of cells in it. For non-spherical particles, this term would be a bit less steep. The spherical case scenario is the most dramatic one, in the sense that the deep layers are slowest to extract (this is because spheres have maximal curvature).

There is a way to make the equation above a bit easier to deal with, by assuming that the cell layers are continuous instead of discrete. This is not true in real life, because there are no “half-cells”, or fractions of cells, extracting in their own particular ways. However, I believe a continuous model is *more* realistic, because it is closer to the result of irregular layers of cells, where not all cells in a given layer are *exactly* at the same depth, or have *exactly* the same number of entry ports. These small random deviations in the exact extraction speed within a given layer of cells will produce a similar effect to the assumption of continuous layers of cells. Thus, here is the continuous version of the equation above:
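In the same spirit as the discrete sum, the continuous version should read (again my reconstruction, up to normalization):

```latex
m_i(t) \;\propto\; \int_0^{R} \left( R - x \right)^2
\left( 1 - e^{-t \,/\, \left( \tau'_i \, e^{\, x / \lambda} \right)} \right) \mathrm{d}x
```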

In this equation, *x* represents the depth inside the coffee particle (expressed in the same units as the radius *R*). One may be tempted to *solve* this integral and find a form for the extracted mass versus time that is easier to work with, but don’t go there – you would encounter a dreadful beast that has many names, one of which is the *confluent hypergeometric function*. It takes *three* **sets** of arguments, and is a real nightmare to deal with (to all readers that are thinking right now “*what the hell is this guy rambling about*“, I apologize).

Now that we have re-framed *m_i* in this way, it represents the full extraction output of chemical compound number *i* for a full coffee particle, instead of just a coffee cell. Let’s see how its rate of extraction is affected by the fact that deep layers extract slower:

Basically, the overall extraction for the spherical coarse particle is slower, and reaches an inflection point where it becomes *really slow* near 3 times the characteristic extraction time *τ*. If you’re willing, go re-watch the Barista Hustle experiment; you might notice this red curve looks a lot like the cupping bowl that contains coarse coffee grounds !

Now, another interesting aspect is to estimate the contributing fraction of a fast-extracting compound to the beverage, as a function of time. Obviously, the concentration of this compound will be at its highest when the brew just started, because it had a head-start from being a fast-extracting compound. Here’s what this would look like:

… and if we did the same thing for a slow-extracting compound:

The figures above show how even the contribution of a given chemical compound to the cup’s flavor evolves differently for particles of different sizes. One way to go even deeper is to look at the distribution of extraction yields per coffee cell, in terms of their contribution to the total beverage by weight. Let’s look at the result for four different particle sizes, and three different brew times:

There are a few things we can learn from these figures:

- Different-sized coffee particles provide different flavor profiles to the cup.
- The highest extraction yields are always the top contribution, regardless of particle size (they correspond to the collective outermost layer of all particles).
- A coffee cup made with a perfectly even distribution of particle sizes will not taste like a set of perfectly evenly extracted coffee cells.
- Differences in flavor profiles for different particle sizes should be more stark for shorter brew times.

There’s another interesting thing we can look at with this model: how does the average extraction yield depend on particle size ? To do this, I ran a simulation at ten different brew times, from one to ten times the characteristic extraction time:

There’s nothing shocking here: smaller particles extract faster, and longer brew times lead to higher average extraction yields. The more interesting part is that these curves are not easy to reproduce with a simple equation. Even if we were to look at the *speed at which* each particle extracts as a function of its radius, or surface, the result is not a nice power law, like the naive “extraction speed” = “1 / particle surface” assumption I once made in the past. Rather, it’s a relatively complex functional form that needs to be modelled properly instead of approximated !
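For readers who want to play with this themselves, here is a small numerical sketch of the continuous-layer model described above (my own implementation; λ = 100 microns as suggested by the Barista Hustle estimate, and the radii and times are arbitrary examples). It reproduces the qualitative behavior: smaller particles and longer brew times both give higher extraction fractions.

```python
import math

def extraction_fraction(t_over_tau, radius_um, lam_um=100.0, n_steps=2000):
    """Fraction extracted for a spherical particle: numerically integrate
    (R - x)^2 * (1 - exp(-(t/tau') * exp(-x/lam))) over depth x,
    normalized by the integral of the geometric weight (R - x)^2."""
    dx = radius_um / n_steps
    num = den = 0.0
    for k in range(n_steps):
        x = (k + 0.5) * dx            # midpoint rule over depth
        weight = (radius_um - x) ** 2  # fewer cells in deeper shells
        num += weight * (1.0 - math.exp(-t_over_tau * math.exp(-x / lam_um)))
        den += weight
    return num / den

for radius in (100.0, 300.0, 800.0):
    fracs = [extraction_fraction(t, radius) for t in (1, 3, 10)]
    print(f"R = {radius:3.0f} um: " + ", ".join(f"{f:.2f}" for f in fracs))
```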

The models I developed in this post will be used to translate particle size distributions into distributions of flavor profiles (via the distribution of extraction yields), in a future application I will release soon. To the hardcore geeks that made it this far in the blog post, I congratulate you two !

I’d like to thank Mitch Hale for a discussion that helped me put some order to these thoughts, and Noé Aubin Cadot for helping me figure out some *Mathematica* stuff.

As you might imagine, I really like this scientific-minded approach to blind tasting. Unfortunately, the answer to this question is not that simple, and we must plunge into combinatorial statistics if we want to answer it. I won’t do this here, but I will provide you with a way to get an answer *without caring* about combinatorial statistics. I’m sure most of you do not care about the long, detailed equations.

Even if you don’t care about maths, I would like you to read a few paragraphs below that I think are super important to understand, so please bear with me for a bit longer. I promise you won’t encounter any more equations.

A common theme to all problems of mathematics and physics is that a question must be posed very precisely before we can answer it. This is often the hardest part of a problem: formulating it precisely and correctly. The way we posed the question earlier is not precise enough to start doing maths with it, because we need to specify what we mean exactly by “statistically significant”. To do this, we also need some reference point. We want our experiment to be better than something, but better than what ?

One neat way of setting up the problem is by adopting the frame of mind of *classifiers*. A person trying to identify the intruder coffee among three cups can be called a *classifier*: they can be a very efficient classifier, succeeding at every blind tasting, or a very bad classifier, randomly selecting a cup because they are unable to taste anything different in the three cups. There’s also a third, rarer possibility: someone could be a *misguided classifier*, by *always* identifying one of the two wrong cups as the intruder. If you think about it, this is even worse than a random classifier, because the random classifier will at least be right a fraction of the time.

Now that we talked about classifiers, it becomes easier to ask the question more precisely. As a first step toward this, we can ask instead something like “*Am I better than a random classifier at this ?*“. This is a step in the right direction, but it is slightly incomplete. Let’s take a simple example: You did the blind tasting test three times, and successfully identified the intruder coffee twice. The third time, you chose the wrong cup, and therefore you failed. Is the random classifier better than you ? Well, it will be *sometimes*. If you ask a random classifier to repeat this experiment of three tastings a dozen times, it may beat you a few times by identifying the correct cup at least twice, and then it may do worse the rest of the time.
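To make this concrete, the chance that a random classifier matches or beats a given record follows a binomial distribution. Here is a quick sketch, assuming independent tastings (the function name is mine):

```python
from math import comb

def p_random_at_least(successes, trials, cups):
    """Chance that a random guesser (probability 1/cups per tasting)
    gets at least `successes` correct picks out of `trials` tastings."""
    p = 1.0 / cups
    return sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(successes, trials + 1))
```

For the example above, `p_random_at_least(2, 3, 3)` gives 7/27, i.e. the random guesser ties or beats your two-out-of-three record about a quarter of the time.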

An even better way to pose the question is thus: “*What fraction of the time will I beat a random classifier ?*” This is now a question posed precisely enough that statistics can answer. Obviously, you will want this fraction to be high ! For example, if statistics tell you that you are better than a random classifier 99.9% of the time, you should be happy about it. If you are better than it only 50% of the time, this is *not* great news. You might now realize that there is a *subjective* aspect to the way we interpret this score. There is no universal laws of nature that tell you: “*You must be better than 99.9% of random classifiers in order to be a **good** taster*“. What does “good” mean ?

This is a problem we must embrace, because we are stuck with it. Physics, Chemistry and all other fields of science are also stuck with it. *How confident do you need to be before you think something is probably true ?* This is a fundamental question, and different fields of science adopted different goals of confidence. As an example, the field of astrophysics decided that a confidence of 99.7% is cool. The field of particle physics decided to be more conservative, and decided they want to be at least 99.99994% confident before they change their minds. There is probably some sociology playing a role in this decision, but it is also certainly in part related to how precisely we are able to measure *stuff*. Particle physicists have big labs and can design experiments in them – astrophysicists are stuck lightyears away from their experiment, and all they can do is watch.

Talking in terms of probabilities like 99.7% or 99.99994% is a bit impractical, unless you really enjoy counting decimals. Fortunately, there is another way to describe this, with a very simple number that you can view as a *score*. In technical terms, this is called an “*N-sigma significance*“, but you can now safely forget I ever said that. Just think of it as a score, and you want it to be as high as possible. Let’s visualize a few different scores in a table, and translate them to % of confidence:

| Score | Confidence |
| --- | --- |
| 1 | 68.3% |
| 2 | 95.4% |
| 2.4 | 98.4% |
| 3 | 99.7% |
| 4 | 99.994% |
| 5 | 99.99994% |
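If you want to verify this table yourself, the score-to-confidence translation is just the two-sided area under a normal curve, which the Python standard library can compute (a minimal sketch):

```python
from statistics import NormalDist

def confidence(score):
    """Two-sided confidence level corresponding to an N-sigma score."""
    return 2.0 * NormalDist().cdf(score) - 1.0
```

For instance, `confidence(2)` gives 95.4% and `confidence(3)` gives 99.7%, matching the table.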

Here’s what I suggest: let’s try to reach a score of at least 2 when we do blind cupping experiments. This means we will draw wrong conclusions only 4.6% of the time, and it will not take a crazy amount of tasting ability or repetition to reach this. Obtaining a Q-grader license requires correctly identifying an intruder cup amongst three in at least five out of six trials, and this corresponds to a score of 2.4. I won’t suggest that everyone should aim at Q-grader level scores all the time.

Now, let’s talk about designing a blind cupping experiment. Choose a number of identical cups; maybe you would like to use three cups like my friend. Now fill all but one of the cups with the same coffee, and fill the last cup with a coffee that is different *in some way*. Maybe you want to see if you are able to recognize a different origin, something new you tried with roasting, or a different type of brew water. The more cups you use, the harder the challenge will be, and you will thus get to higher scores faster when you succeed. Mark the bottom of each cup with what it contains, ask someone else to swap the cups around, and then try to identify the intruder cup without looking at the tags. Once you think you found it, look at the tag underneath, and mark on a sheet whether you succeeded or failed. Do this a dozen times and log your results; I’ll help you decide what your score is.

One thing you **absolutely cannot do** when you design this experiment is to decide, after 5 tastings, that you are failing too many times and start over from scratch. This is how the fields of psychology and biology got themselves into a crisis where a large number of their published experiments turned out to be false.

For those of you who do not want to think further about maths or science, I built a *Wolfram Alpha* widget for you. Just enter how many cups you are using in each tasting (“*Number of Cups*“), how many times you tasted (“*Number of Trials*“), and how many times you failed to identify the intruder cup (“*Number of Failures*“). Then press “*Submit*“. You will then see some ugly equation output that I was unable to remove from Wolfram’s display, but just focus on the number at the end. This is your score.

For the default values (3 trials, 3 cups, 1 failure), your score would be 1.1. This is a really bad score – you will certainly need to do more than 3 tastings if you want to be confident about your experiment. If you reach 8 tastings with only one failure (with 3 cups each time), then you will reach a score of 2. If you however fail twice, you will need even more successful tastings to reach a score of 2.
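For reference, here is a sketch of one way such a score can be computed: take the probability that a random classifier matches or beats your record (a binomial tail), and convert it to the two-sided normal convention of the table above. This is my own formulation and the widget may differ in detail, but it reproduces the 1.1 score of the default example and the 2.4 score of the Q-grader requirement:

```python
from math import comb
from statistics import NormalDist

def sigma_score(failures, trials, cups):
    """An N-sigma style score for a blind-tasting experiment: how unlikely
    it is that a random guesser would match or beat your record, mapped
    onto the two-sided normal convention used in the table above."""
    successes = trials - failures
    p = 1.0 / cups
    # chance that a random classifier does at least as well as you did
    tail = sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(successes, trials + 1))
    return NormalDist().inv_cdf((2.0 - tail) / 2.0)
```

`sigma_score(1, 3, 3)` gives about 1.1, and `sigma_score(1, 6, 3)` (the Q-grader requirement) gives about 2.4.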

If you never heard about *Wolfram Alpha*, it’s a wonderful website. It’s like a robot version of Wikipedia that can do maths. You can ask it really silly stuff, like “*What is the average life span of donkeys ?*” and it knows the answer surprisingly often. The kind of questions you probably ask yourself every day.

I’d like to thank Victor Malherbe of the Montreal Coffee Academy for asking me this question on statistics, and for the information on Q-grade requirements.

I found this video rather illuminating, but the last minute or so was at first a bit confusing to me, so I decided it may be worth discussing here.

The most illuminating part for me was the fact that a very long immersion brew with coarse grounds never reached the higher extraction yields that the finer coffee grounds reached. Here, this higher extraction yield is about ~25%. To those wondering why the fraction of extracted coffee is not higher, it is because ~30% roughly corresponds to the fraction of coffee beans by mass that can be dissolved in water (this is discussed a bit more in a previous post). The rest consists of cellulose walls and other stuff that cannot be dissolved, and remains in the coffee bed. This maximum extraction yield will depend on the type of coffee beans and the roast profile you used, so you may sometimes hear slightly different numbers.

When you think about it, it makes a lot of sense that water is just unable to reach the core of each coffee particle when you use a coarse grind setting. The typical size of a cell inside a coffee bean is about ~20 micron (see image below), and when we brew for a V60 filter coffee we typically have a majority of particles with diameters around 500 micron. This means that water would need to diffuse through 12-13 layers of cells to reach the core of each coffee particle. What this video from Barista Hustle demonstrated is that water only penetrates about 100 micron, or ~5 layers of coffee cells.

This means that we are calculating average extraction yields wrong when we grind anything coarser than ~200 micron in diameter, as Matt also mentions in the video. This is because we assume that the full mass of our coffee dose is being extracted when we calculate extraction from total dissolved solids, but in reality there is a large portion of the core of each coffee particle that is still intact, and effectively wasted. This is interesting, but does not provide an easy way to correctly measure the extraction yield – to do this, you would need to know the full distribution of coffee particle sizes you are brewing with.

The part I found a bit more confusing was the end of the video. There, Matt mentions that their cupping bowl made with the coarser grounds actually reached a high extraction. When I first heard this, I thought he meant that it was a *maximally* extracted thin shell around the coffee bean, which would mean that the nasty bitter and astringent chemical compounds should have come with it. But when I thought a bit more about this, I realized this is not what he meant. The red (higher-extraction) curve in the video is actually a good representation of how much *every part of accessible coffee* is being extracted. If the fine grounds (red curve) are not reaching extractions high enough to obtain bad-tasting compounds, then none of the accessible coffee mass is. The shells of coarser grounds will get extracted to exactly the same extent as the fines are; you will just be wasting a lot of intact coffee in their cores.

At the end of the video, Matt briefly mentions that it is still possible to over-extract coffee and obtain a bitter and astringent cup. This problem will certainly arise if you are trying to extract the full core of a coarsely ground coffee. When you do this, the outer surface of the coffee ground will be super over-extracted by the time you start extracting the core. In effect, this lone, coarse coffee ground will be producing an uneven extraction, a lot like a wide distribution of particle sizes would !

Suppose you had a large number of spherical coarsely ground coffee particles, all with exactly the same size. You would be faced with a choice: either you extract a thin shell of coffee around each particle in a relatively uniform way and waste a lot of coffee in the cores, or you minimize waste and produce a very uneven extraction.

All this discussion made me appreciate more why cuppings produce very balanced flavor profiles, typically produced by even extractions – finer grind sizes are used for cuppings, compared to other immersion methods like the siphon and french press. This makes me want to experiment with my siphon at a much finer grind size !

[EDIT January 23, 2019: For those interested, you can calculate the mass fraction of a coffee particle that is accessible to water with the following equation, which was derived from simple geometry (with r the particle radius):

[accessible mass fraction] = 1 – ((r – [depth]) / r)^{3}

where the [depth] was demonstrated by Barista Hustle to be approximately 100 micron. For example, if you have a coffee particle with a diameter of 1 millimetre (or 1000 micron), the fraction of mass available to water is 48.8%.]
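This geometric fraction is easy to compute; here is a small sketch (the function name and defaults are my own), assuming water penetrates 100 micron below the surface of a spherical particle:

```python
def accessible_fraction(diameter_um, depth_um=100.0):
    """Mass fraction of a spherical coffee particle accessible to water,
    assuming water only penetrates depth_um microns below the surface.
    The inaccessible core is a sphere of radius (radius - depth)."""
    radius = diameter_um / 2.0
    if radius <= depth_um:
        return 1.0  # particles this small are fully accessible
    return 1.0 - ((radius - depth_um) / radius) ** 3
```

`accessible_fraction(1000.0)` reproduces the 48.8% quoted above, and any particle under 200 micron in diameter comes out fully accessible.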

I would like to thank Mitch Hale and Caleb Fischer for the long and very interesting discussion that led to some of the thoughts shared here. I also want to thank Barista Hustle for their very illustrative experiment !

*The history of The Spin is murky…although it’s often called the “Rao Spin,” I did not invent the spin. It’s likely that James Hoffmann was the first person to spin the slurry. Almost everyone to whom I’ve shown The Spin has immediately adopted it. It’s easy to execute well and it works, pretty much every time.*

*Jonathan Gagné, an astrophysicist based in Montreal, came to my roasting masterclass this past November. I’ve been fortunate to befriend Jonathan, as I’ve always wanted to have an astrophysicist on speed dial to call when I have a question about how things work :). I’ve been helping Jonathan with his coffee making and he’s been providing some great coffee-analysis resources, some of which I hope appear on this blog.*

*I asked Jonathan to explain why he believes The Spin works and we decided to publish his answer as a guest post here.*

*In this post we will discuss the physics behind why spinning the V60 during a brew is a useful method to obtain a more uniform extraction. While spinning is helpful, it’s important not to overdo it – it can cause fine coffee grounds to migrate to the bottom of the slurry and clog the filter, slowing the drawdown and imitating a brew made with a lower-quality grinder.*

Spinning the slurry during a V60 brew is useful to minimize the channeling of water that can lead to an uneven extraction. The reason why this is true can be understood with the help of physics.

A rotating slurry will experience a centrifugal force*, which means that every drop of water and every particle of coffee will feel a force that pushes it outward. In physics, the centrifugal force is stronger for heavier objects, and because of this, water will tend to migrate outward more than coffee, because water is heavier.

When you brew coffee, the main cause for channeling is that dry coffee repels water more than wet coffee does. The physics behind this effect are not fully understood: they are related to the fact that molecules of water bond with each other, and dry coffee doesn’t bond with them in the same way. At first pour, water might begin travelling through a tiny hollow on the surface of the dry coffee, and then it will prefer to keep traveling through that same tunnel, because the rest of the coffee bed is still dry and repels water. In practice, a coffee bed will often develop several channels if you don’t take steps to avoid it.

When you rotate a channeled coffee bed, the water flowing down the narrow tunnels is forced out of them by the centrifugal force, and the water will wet some of the dry coffee. This horizontal re-mixing of the slurry will cause channeling to decrease overall.

There is, however, a drawback if you spin too much. As we mentioned earlier, heavier things are more affected by the centrifugal force. The largest coffee particles will thus experience a stronger pull toward the walls of the V60. In a slurry where coffee is mixed with water, this effect will be slightly reduced by water friction. Think of trying to run in the sea – the friction water exerts on you will slow down your movement, especially if you present it with a large surface, for example by wearing saggy pants. The friction is however not strong enough to completely stop the migration of particles based on their size, and the larger coffee particles will be sent outwards.**

This whole situation presents the smallest particles with an opportunity: the larger ones having moved out of the way, fines will sink down to the bottom of the V60, where they will be free to do their worst at clogging the paper filter. This will significantly slow down the flow of your brew.

As an illustration of this, I recently brewed a few V60s with a prewet-plus-two-pours method, performing a spin right after the prewet, and right after each of the two pours. At first I did not pay too much attention to how long or how strongly I was spinning, and I experienced large inconsistencies in my brew time (up to ~20 seconds), which led to inconsistencies of about 0.7% in average extraction yields. I was controlling everything else, including the height from which I poured, the flow rate, timing, grind size, slurry temperature, etc.

I then tried timing my spins, and found that using seven-second spins resulted in a 5:18 drawdown time, while two-second spins resulted in a much shorter 4:28 drawdown time! This is a nice demonstration that fines can migrate and clog your filter if you spin too much. Adjusting the grind size appropriately to maximize extraction yield and avoid astringency, I found that the two-second spins resulted in a brighter and more enjoyable cup.

In summary, you want to spin just enough to break up the channels, but not so much that fines migrate and clog the bottom of the filter.

* *I can already hear the interwebs shouting “CENTRIPETAL NOT CENTRIFUGAL”. Both concepts are valid and useful tools: when you stand outside of a rotating system and want to describe forces acting on that system that keep it together, the concept of a centripetal force (directed toward the center) is appropriate. It describes the external force that allows the system to keep going in this rotating motion without splattering everywhere around. In our case, this force is provided by the walls of the V60, preventing the slurry from flying around and messing up your counter. If, however, you take the point of view of the things rotating (the water and coffee), then the concept of a centrifugal force becomes very useful. You can then describe the system as if it was not rotating, by just adding a slight modification: you add an artificial “pseudo force”, also called an “inertial force” that points toward the outside, in our case the “centrifugal force”. It is often called an “inertial force” because it arises from the fact that your frame of reference (the V60 in this case) is rotating (in technical terms, it is “not inertial”). A “pseudo force” is by no means a false thing or an invalid concept, as long as you understand where it arises from and use it carefully — in fact, one can even see gravity as a pseudo force (Einstein realized that), yet it is very useful in everyday situations to view gravity as just a normal force.*

** *The mass of a coffee particle is proportional to its volume, which is itself proportional to the cube of its size. The water friction that the particle experiences is proportional to its surface, or to the square of its size. As a consequence, a particle three times larger will be 27 times more massive and will feel 27 times more centrifugal force, but only nine times more water friction. If you combine the two effects, it will therefore be pushed outwards 27 / 9 = 3 times more.*

In my first post, I mentioned how the water you use to extract coffee has a significant impact on the taste profile of your cup, in a way that does not necessarily depend on the taste of the water by itself. If you were using water just to dilute a cup of espresso (e.g., when making an americano), then your only worry would be that the water tastes good.

The key difference comes when you use water to *extract* coffee from the ground beans. In that situation, you want to have some potent mineral ions like magnesium (Mg^{+2}) and calcium (Ca^{+2}) that can travel inside the bean’s cellulose walls and come back with all the compounds that give a cup of coffee its great taste. According to the Specialty Coffee Association (SCA), sodium (Na^{+}) also plays a role, but a somewhat less important one. If you are wondering whether this is also true of tea – yes it is. If you live in Montreal, you might have noticed that you are unable to brew tea as good as what you can drink at *Camellia Sinensis*, and your tap water is one of the main reasons (they use mineralized water at Camellia Sinensis).

In this post, I’d like to discuss extraction water a bit more, and give some practical tools for everyone to improve their brew water without necessarily needing fancy equipment. Let’s start by listing some of the SCA recommendations for brew water (ordered by my perceived importance):

- No chlorine or bad smell
- Clear color
- Total alkalinity at or near 40 ppm as CaCO_{3}
- Calcium at 68 ppm as CaCO_{3}, or between 17–85 ppm as CaCO_{3}
- pH near 7, or between 6.5–7.5
- Sodium at or near 10 mg/L
- Total Dissolved Solids (TDS) at 150 mg/L, or between 75–250 mg/L

The first two recommendations are more widely known, but they are always good to keep in mind if you start creating your own mineral recipes (more on that later). If your resulting water is milky or shows visible mineral precipitation, it’s not good ! If this happens, you probably added way too many minerals for some reason. You can also easily get rid of chlorine by letting water sit on the counter for an hour or so.

Total alkalinity is often confounded with pH, but it’s not the same thing. pH measures the (logarithmic) ratio of free OH^{–} ions to H^{+} ions in a solution, with pH = 7 corresponding to a unit ratio (neutral). A larger amount of H^{+} ions produces a more acidic solution, with a lower pH, and a larger amount of OH^{–} ions produces a more *alkaline* solution, with a higher pH. This is why *total alkalinity* is often confused with an *alkaline solution*, which is kind of understandable given this poor choice of terms.

Total alkalinity typically measures the amount of HCO_{3}^{–} ions, which are able to capture any free H^{+} ions that are added to the solution, and prevent them from making the solution more acidic by forming carbonic acid:

HCO_{3}^{–} + H^{+} → H_{2}CO_{3}

For this reason, HCO_{3}^{–} is termed an *alkaline buffer* in this context. A high total alkalinity will therefore make a solution more stable against pH changes. This bears some importance in coffee making, but there is a big problem with having a total alkalinity that is too high: it can react with the aromatic acids that were extracted from the coffee beans, and mask some of these important flavors. This is why the SCA recommends a very narrow range in total alkalinity near 40 *ppm as CaCO_{3}*.

You may sometimes hear total alkalinity referred to as carbonate hardness. It’s a slightly different concept, but for coffee extraction water it’s almost always equal to total alkalinity (technically, this is true when the *total hardness* of water is higher than its *total alkalinity*).

At this point you may be thinking “*what the hell is this unit of measurement involving this random molecule CaCO_{3} ?”*. Turns out scientists love to create large collections of weird measurement units, and this is yet another example of that (like measuring the energy of stars in *ergs*…). These *ppm as CaCO_{3}* basically ask: what concentration of dissolved CaCO_{3} would provide the same amount of alkaline buffer ?

The next recommendation is to have calcium hardness between 17–85 ppm as CaCO_{3}, with the units again relating to the same chemical reaction above. Magnesium is also widely used in the specialty coffee community, and is believed to extract slightly different flavors, but to my knowledge there are not yet any lab tests to back this up (there might be some blind testing backing it up, but I’m not aware of it). As a consequence, most people use a mix of magnesium and calcium as the extracting agents. I already explained the logic behind this recommendation above; you basically just want enough of these cations to do the extraction job properly, but not so much as to throw the flavor of the coffee off balance, or to cause massive corrosion or scaling in your equipment.

Both the magnesium and calcium cations count toward the *total hardness* of a solution, defined as the summed concentration of many cations (positively charged ions), among them calcium, magnesium, iron, strontium and barium. In coffee extraction applications, only magnesium and calcium are typically present, so total hardness is just taken as their sum. A more generally applicable recommendation would therefore be to keep *total hardness* in the SCA range, rather than just calcium hardness.
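Since total hardness is quoted in ppm as CaCO_{3} while mineral contents are usually listed in mg/L, here is the standard conversion as a small sketch (the function name is mine): each +2 cation is counted as the amount of CaCO_{3} (100.09 g/mol) carrying the same charge.

```python
def total_hardness_as_caco3(ca_mg_per_l, mg_mg_per_l):
    """Total hardness in ppm as CaCO3 from calcium and magnesium in mg/L.
    Each cation is converted through the ratio of molar masses
    (CaCO3: 100.09 g/mol, Ca: 40.08, Mg: 24.31; all carry +2 charges)."""
    return ca_mg_per_l * (100.09 / 40.08) + mg_mg_per_l * (100.09 / 24.31)
```

With the 17 mg/L calcium and 6 mg/L magnesium of one of the recipes discussed later, this gives roughly 67 ppm as CaCO_{3}, close to the quoted total hardness of 69 (the small difference likely comes from rounding of the source numbers).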

The next two recommendations often receive little attention in the specialty coffee community. I often see water recipes with pH in the range 8.0–8.2 (slightly alkaline) whose resulting coffee tastes great. I haven’t done extensive tests comparing pH~7 water to these recipes, as it’s typically hard to play with pH without affecting the other variables above. I also have not experimented much with the effect of sodium, so that could be the subject of a future blog post; for now, I just try to follow the SCA recommendation, but I don’t put too much focus on it.

A lot of people use tap water through a Brita to brew coffee. This is not bad in principle, but all such a carbon filter does is remove chlorine and other undesirable components, and soften the water (it decreases total hardness and total alkalinity). If this lands you in a good zone for brewing, that’s great, but it is rarely the case for typical tap water.

At this point, it would be useful to visualize the water properties of different cities, bottled waters and some recipes of coffee professionals:

In the figure above, you can see the range recommended by the SCA (green bar), the region recommended by the Colonna-Dashwood & Hendon (2015) *Water for Coffee* book (this mythical book is now pretty much impossible to find, but it is said by the ancient ones to go much deeper into the chemistry of coffee extraction than what I could ever write in this blog post), and the more constrained region recommended by the Specialty Coffee Association of Europe (SCAE), which is mainly based on avoiding regions of significant scaling (upper right) or corrosion (upper left), two aspects that are mostly important for the delicate internal parts of espresso machines. The Third Wave Water (TWW) classic and espresso profiles are little bags of pre-weighed minerals that you can dump in a gallon of distilled water to get easy water for coffee brewing.

The dashed line on the figure corresponds to a 1:1 ratio of total alkalinity to total hardness. Most naturally occurring water will fall near this line because of how water acquires its minerals by dissolving limestone. The widely used process of water softening by de-carbonization also moves the composition along this region (toward the origin of the figure). This is why a lot of city tap waters (triangles) and bottled waters (stars) fall along that line. I can’t believe that I lived for 3 years in Washington D.C. without ever knowing about any of this (and I Brita’d my water out of this great spot like a fool). You would be surprised how many city or bottled waters fall completely outside the range of this figure.

All other circles on the figure correspond to mineral recipes used or recommended by different professionals (e.g., the Leeb & Rogalla book, Scott Rao, Matt Perger, Dan Eils, the World of Coffee Budapest championship, the 2013 Melbourne World Barista Championship, and several recipes from Barista Hustle), the stars correspond to bottled waters, and the triangles correspond to different cities.

Now that we have talked about the theory behind extraction water, we should focus on practical applications. You would be surprised how many specialty coffee shops have very expensive water filtration systems based on reverse osmosis to rid the water of all its contents, and re-mineralization resins to achieve something close to these recommendations (try asking your favorite coffee shop).

At home, however, none of this is really practical, as these devices typically cost several thousands of dollars, and still require you to monitor your tap water and adjust their settings from time to time. Unless you have the incredible luck of living somewhere with great brew water (the only example I know is Washington D.C., at least in 2018), you have the following options (ordered by increasing effort required):

- Get a magnesium re-mineralizing water pitcher (e.g., the *BWT*).
- Order some Third Wave Water minerals and dissolve them in a gallon of distilled water.
- Mix a pre-determined combination of bottled water brands.
- Buy distilled water and re-mineralize it yourself. This requires a bit more work but gives you incredible flexibility.

The first option has the merit of being simple, but you have almost no control over the final result. A BWT pitcher will soften your water and then add in some magnesium, which will move you toward (0,0) and then upward in the figure above. I don’t know to what extent it moves the composition around, so ideally you’ll want to test the result with some aquarium water hardness and alkalinity kits. I suspect the result would be decent in cities with compositions similar to Montreal’s.

I found that Third Wave Water (the “classic profile”) produces a really good result for very little effort. You do have to buy a gallon of distilled water, which is a bit of effort, but they are extremely cheap and will last for a dozen cups of coffee. The “espresso profile” of Third Wave Water is useful if you are worried about scaling and corrosion in your espresso machine, so I recommend only using it for espresso, not for filter coffee. I blind-tasted a Colombian coffee (the *Ignacio Quintero* from Café Saint-Henri) extracted with Third Wave Water against the Rao/Perger and Dan Eils water recipes (discussed more below), and I found the Third Wave Water to be a bit overwhelming in terms of resulting acidity.

My guess is that this is due to third wave water being much higher than the other recipes in terms of total water hardness. I preferred the Rao/Perger recipe, but in all honesty all three cups were very good, and way better than what you get with Montreal tap water. I think third wave water is also a good option for traveling, as it comes in a little sealed package with the composition marked on it, so that might not cause problems at TSA (although I have not tested this yet). You would still need to buy a gallon of distilled water though, so depending on the nature of your trip this could be a non-ideal solution.

I must confess, I am not sure I placed the Third Wave Water points at the right position on the “total alkalinity” axis. This is because they use a less common component called “calcium citrate”, or Ca_{3}(C_{6}H_{5}O_{7})_{2}, in their mix of minerals. Once dissolved in water, each of these molecules will liberate three Ca^{+2} cations and two C_{6}H_{5}O_{7}^{-3} citrate anions (negatively charged ions). I treated each of these citrate anions as an alkaline buffer that can capture three H^{+} cations each, and assumed that they are stable enough as citric acid (C_{6}H_{8}O_{7}) to prevent a significant pH change. This is a lot of assumptions, and I also needed to assume that citric acid is as efficient at actually capturing the H^{+} cations as are the HCO_{3}^{–} anions. Once I made these assumptions, I just calculated what amount in “ppm as CaCO_{3}” of HCO_{3}^{–} would have the ability to capture the same amount of H^{+} cations. It is quite interesting that the classic profile falls quite close to other brew water recipes in total alkalinity when making all these assumptions.
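As a sketch of that buffering arithmetic (the dosage used in the test below is a made-up example, since I am not quoting Third Wave Water's actual recipe): each mole of calcium citrate releases two citrate anions capturing three H^{+} each, while each CaCO_{3} unit captures two, so:

```python
# standard molar masses (g/mol)
M_CA_CITRATE = 3 * 40.078 + 2 * (6 * 12.011 + 5 * 1.008 + 7 * 15.999)  # Ca3(C6H5O7)2
M_CACO3 = 40.078 + 12.011 + 3 * 15.999

def citrate_alkalinity_as_caco3(calcium_citrate_mg_per_l):
    """Equivalent alkalinity (ppm as CaCO3) of a calcium citrate dose,
    assuming each citrate anion captures 3 H+ while each CaCO3 captures 2."""
    mol_per_l = calcium_citrate_mg_per_l / 1000.0 / M_CA_CITRATE
    h_capture_eq_per_l = mol_per_l * 2 * 3  # 2 citrates per formula, 3 H+ each
    return h_capture_eq_per_l * (M_CACO3 / 2.0) * 1000.0  # back to mg/L
```

This is only the conversion step; predicting the actual Third Wave Water point would also require knowing their dosage per gallon.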

[Update, January 3 2019: I have now tested the total alkalinity of *Third Wave Water* (classic profile) with a Hanna Instruments photometer, and obtained a measurement of 43 +/- 5 ppm as CaCO_{3} total alkalinity; this is very close to the ~ 50 ppm as CaCO_{3} that I had predicted ! It could be slightly lower because citrate anions may be slightly slower or worse at capturing H^{+} cations, but this is almost within the measurement error so I would not deduce too much from this measurement alone. The main point is: citrate anions *do* act as an alkaline buffer, and third wave water is exactly at the SCA-recommended value for total alkalinity !]

Mixing bottled waters or distilled water is another viable option. If you use a combination of two bottled waters, you can imagine a line drawn between the two stars that correspond to each bottled water’s properties in the figure above, and different mixing ratios will place you at different spots along that line. Using three bottled waters instead of two allows you to move on a triangle-shaped surface that connects the three bottles in the chart. A problem with a lot of bottled waters is that they are not far above the 1:1 total alkalinity vs total hardness line (the dashed line in the chart), making it harder to fall anywhere in the Colonna-Dashwood & Hendon (2015) region. The lack of bottled waters high in total hardness and low in total alkalinity limits the usefulness of three-bottle combinations.
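The “line between two stars” idea can be sketched numerically: a blend’s properties are just the ratio-weighted average of the two waters’ properties. The compositions below are placeholder values I made up for illustration, not measurements of any real bottled water.

```python
# Linear blending of two water compositions: mixing in a given ratio
# places you on the straight line between the two waters in the
# hardness-vs-alkalinity chart.

def mix(water_a, water_b, parts_a, parts_b):
    """Weighted average of two water compositions (dicts of ppm values)."""
    total = parts_a + parts_b
    return {k: (water_a[k] * parts_a + water_b[k] * parts_b) / total
            for k in water_a}

soft = {"hardness": 10, "alkalinity": 5}     # hypothetical soft bottled water
hard = {"hardness": 500, "alkalinity": 280}  # hypothetical mineral-rich water

# 10 parts soft to 1.6 parts hard:
blend = mix(soft, hard, 10, 1.6)
print({k: round(v, 1) for k, v in blend.items()})
```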

From the little data gathering I have done so far, I found that mixing a water really high in both total alkalinity and total hardness (like *Montclair* water) with a much softer water is a good way to go. Here are three bottled water recipes that seem to work great (with their designated letters on the next figure):

Right now, the best 2-bottled combination I could find is 10 parts *Smart Water* to 1.6 parts *Montclair*. This will place you at a total alkalinity of 40 ppm as CaCO_{3}, and a total hardness of 69 ppm as CaCO_{3}, nicely split between calcium (17 mg/L) and magnesium (6 mg/L). It will even include 5 mg/L of sodium, falling a bit short but not that far from the SCA recommendation.

Another great option is to mix 10 parts distilled water with 2.05 parts *Montclair* water. This is very similar to the last recipe, but slightly softer (67 ppm as CaCO_{3}), and with a bit more sodium (9 mg/L), extremely close to the SCA recommendation in sodium.

If you can’t get your hands on *Montclair* water, try this one: 10 parts *Smart Water* with 1.6 parts *Compliments*. This will get you something a bit softer in total hardness (57 ppm as CaCO_{3}), still with a mix of calcium (14 mg/L) and magnesium (5 mg/L), but without sodium.
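As a sanity check on the hardness numbers quoted above, total hardness in ppm as CaCO_{3} is just the calcium and magnesium concentrations rescaled by the molar mass of CaCO_{3} (factors of about 2.497 for Ca and 4.118 for Mg). A minimal sketch:

```python
# Total hardness, "ppm as CaCO3", from the Ca and Mg content in mg/L.
# The factors are M(CaCO3)/M(Ca) = 100.09/40.08 and M(CaCO3)/M(Mg) = 100.09/24.31.

def hardness_ppm_caco3(ca_mg_L, mg_mg_L):
    return 2.497 * ca_mg_L + 4.118 * mg_mg_L

# The Smart Water + Compliments blend above (14 mg/L Ca, 5 mg/L Mg):
print(round(hardness_ppm_caco3(14, 5)))  # -> 56, close to the quoted 57 ppm
```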

I have not tried tastings with these bottled water recipes yet; this was determined just from calculations. Let me know if you try them before I do !

If you would like to experiment with some more mixes of bottled water, I created a Google Sheet here, which I will keep updating in the future. You can do File/“Make a Copy”, and then you’ll be able to add in some more bottled water and create new recipes. You can also find many more mixed bottle water recipes that I fiddled with in there.

Another viable option may be to mix your tap water with distilled water, but this will only allow you to move along a line connecting (0,0) to your city in the first figure, and you would ideally need to monitor seasonal variations in your tap water hardness and alkalinity. I added a few tap water compositions (Montreal, Laval and Washington DC) in the bottled water spreadsheet.

If you want to take things to the next level, you can get yourself some minerals, a scale precise to 0.1 g or better (mg-precision scales are not too expensive; I use this one and I really like the small plastic dishes that come with it), some mason jars, and a pipette or a small plastic kitchen spoon. There are a total of five minerals you will need if you want to make all of the recipes below, but the simpler ones can be done with just the first two on this list. For the less common items, I give below the Amazon links I used to buy them.

Please make sure you always buy food-grade ingredients, not pharmacy-grade or lab-grade ones. The latter two may be *more* pure than food grade, but the rare impurity could be much worse for your health (e.g., heavy metals). Barista Hustle mention that pharmacy-grade epsom salt is probably ok to consume at these low concentrations, but I consider the key word here to be “probably”, especially if you’re going to drink this every morning. Once you have opened a bag of minerals, always keep it in a cool, dry place in a hermetic jar, especially the minerals in anhydrous form.

Notice that epsom salt is not simply MgSO_{4}, but rather its heptahydrate form MgSO_{4}•7H_{2}O, which makes it look like a clear crystal. MgCl_{2} and CaCl_{2} can be found in both hydrate and anhydrous (no water) forms. Some vendors don’t specify which form they are providing, which can be annoying, but in general, if you have little white spheres of CaCl_{2}, they are probably anhydrous, and if you have milky clear crystals of MgCl_{2}, they are probably the hexahydrate form (see pictures below). It’s ok if you don’t get the exact hydrate form, but you’ll need to adjust the weights to get the same amount of Ca^{+2} or Mg^{+2} cations.
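The weight adjustment between hydrate forms is just a ratio of molar masses, since the water of hydration adds weight but no cations. A small sketch, using standard molar mass values:

```python
# Converting a recipe weight between hydrate and anhydrous forms so that
# the same amount of Mg2+ or Ca2+ cations ends up in the water.

MOLAR_MASS = {  # g/mol, standard values
    "MgCl2": 95.21, "MgCl2.6H2O": 203.30,
    "CaCl2": 110.98, "CaCl2.2H2O": 147.01,
}

def equivalent_weight(grams, form_have, form_want):
    """Grams of `form_want` carrying the same cations as `grams` of `form_have`."""
    return grams * MOLAR_MASS[form_want] / MOLAR_MASS[form_have]

# 2 g of hexahydrate MgCl2 is roughly 1 g of the anhydrous form,
# consistent with the substitutions listed in the recipes below:
print(round(equivalent_weight(2.0, "MgCl2.6H2O", "MgCl2"), 2))  # -> 0.94
```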

After doing some research on the web, I gathered a dozen mineral water recipes. I have not tried them all yet, but I will comment on those that I did try. I modified all the recipes below to make them more uniform. In all cases, you’ll need to put the specified weights of minerals in a jar that can hold 200 mL of water (ideally slightly more). A glass jar such as a regular mason jar is good for this; I would avoid metallic containers because of potential corrosion.

Once you put the required minerals in the jar, add distilled water until you hit a total weight of 200 g. This will be your concentrate: an often milky-white solution that will initially outgas some CO_{2} and will easily precipitate solid minerals. I recommend keeping the concentrate in a cool, dark place for a few hours with the mason jar lid screwed on only loosely, to allow the outgassing to complete. You can even stir it up a few times to help things along. You will also get a much faster reaction and outgassing if you use warm or hot distilled water, but I am *not* sure whether this affects the resulting composition (I don’t think it does, and my first trial with the Rao/Perger recipe and hot distilled water turned out great).

Your concentrate will be good for about 50 L of brew water (200 g of concentrate at a 4 g/L dose), which is a lot of coffee. In other words, I highly recommend (1) not going crazy and starting 8 different 200 mL concentrates when you first read this, and (2) keeping them tightly closed in the fridge after they have degassed. If you want to compare several water recipes, you can create downsized versions of the concentrates without problem (use a rule of three to scale down both the concentrate volume and the mineral weights by the same factor).

Once you have a concentrate, I recommend putting it on your scale, taring the scale, and using the pipette or small plastic spoon to scoop out 16 g into 4 L of distilled water (i.e., 4 grams per liter). Congratulations, this is your mighty brew water. Make sure you keep it in the fridge, especially when it is almost empty, and always smell it before using it. As I mentioned in my last post, if it smells like an old rag, so will your coffee. In my experience, a gallon of distilled water will turn bad after approximately a week out of the fridge, or a month in the fridge. This is a *much* slower staling process than what you would get with tap water, as distilled water starts out free of any bacteria. I also don’t recommend letting the water sit in your boiler for more than a few hours, but this is less of an issue when you started with distilled water as a base.
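The dilution arithmetic can be sketched as follows: at 16 g of concentrate per 4 L (a 4 g/L dose), each gram of mineral in the 200 g concentrate contributes 20 mg/L to the brew water. I assume here that everything dissolves fully and that the concentrate’s density is close to 1 g/mL.

```python
# mg/L of each mineral in the final brew water, given the recipe weights
# that went into the 200 g concentrate and the 4 g/L dosing described above.

def brew_water_mg_per_L(recipe_g, concentrate_g=200.0, dose_g_per_L=4.0):
    """Final mineral concentrations, assuming full dissolution."""
    return {mineral: grams * dose_g_per_L / concentrate_g * 1000.0
            for mineral, grams in recipe_g.items()}

# One of the epsom salt + baking soda recipes below (10 g + 3.4 g):
print(brew_water_mg_per_L({"MgSO4.7H2O": 10.0, "NaHCO3": 3.4}))
```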

Now, here are the recipes !

- 5 g epsom salt (MgSO_{4}•7H_{2}O)
- 2 g MgCl_{2}•6H_{2}O (hexahydrate) *or* 1 g anhydrous MgCl_{2}
- 1.5 g anhydrous CaCl_{2} *or* 2 g CaCl_{2}•2H_{2}O (dihydrate)
- 1.7 g baking soda (NaHCO_{3})
- 2 g potassium bicarbonate (KHCO_{3})

**Reference:** Scott Rao

**Comments:** So far this is my favorite recipe from blind testing. It produces a bright and well-balanced cup.

- 5 g MgCl_{2}•6H_{2}O (hexahydrate) *or* 2.3 g anhydrous MgCl_{2}
- 3.8 g anhydrous CaCl_{2} *or* 5 g CaCl_{2}•2H_{2}O (dihydrate)
- 5 g potassium bicarbonate (KHCO_{3})

**Reference:** Scott Rao’s Instagram post

**Comments:** This is a great and simple recipe. So far, my second favorite from blind testing.

- 10 g epsom salt (MgSO_{4}•7H_{2}O)
- 3.4 g baking soda (NaHCO_{3})

**Reference:** This website.

**Comments:** I have not tried this one yet.

- 4 g MgCl_{2}•6H_{2}O (hexahydrate) *or* 1.9 g anhydrous MgCl_{2}
- 3 g anhydrous CaCl_{2} *or* 4 g CaCl_{2}•2H_{2}O (dihydrate)
- 3.4 g baking soda (NaHCO_{3})

**Reference:** I deduced this one from other recipes above.

**Comments:** I have not tried this one yet.

- 2.9 g epsom salt (MgSO_{4}•7H_{2}O)
- 1.0 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 6.2 g epsom salt (MgSO_{4}•7H_{2}O)
- 3.4 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 8.4 g epsom salt (MgSO_{4}•7H_{2}O)
- 3.4 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 9.8 g epsom salt (MgSO_{4}•7H_{2}O)
- 3.4 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 9.2 g epsom salt (MgSO_{4}•7H_{2}O)
- 4.2 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 12.2 g epsom salt (MgSO_{4}•7H_{2}O)
- 2.6 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 15.4 g epsom salt (MgSO_{4}•7H_{2}O)
- 2.9 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

- 21.5 g epsom salt (MgSO_{4}•7H_{2}O)
- 3.8 g baking soda (NaHCO_{3})

**Reference:** The Barista Hustle simple DIY recipes.

**Comments:** I have not tried this one yet.

I also collated all of these recipes in another Google sheet, which you can also play with if you do File/“Make a Copy”. That one will estimate the resulting total hardness and total alkalinity from the input recipes, as well as other detailed quantities. You can also use the Aqion website to get the same outputs for the simpler recipes (maximum 3 minerals, and the calcium citrate present in *Third Wave Water* cannot be included). A nice aspect of the Aqion website is that it also gives you the electric conductivity (EC), in units of μS/cm (microsiemens per centimeter), often measured by cheap TDS-meters (TDS stands for *total dissolved solids*). This is a great way to double-check that you didn’t mess up your brew water, but always make sure you measure it at 25°C; even when TDS-meters claim to do a temperature correction, it’s a bad one. I would also not trust the TDS reading itself, because these instruments make important assumptions about the actual composition of your water to translate the EC into a TDS.
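To illustrate why the TDS readout is only an estimate: the meter multiplies the measured conductivity by a fixed conversion factor (commonly somewhere around 0.5 to 0.7 ppm per μS/cm, depending on which salt mix the manufacturer assumed), and your brew water probably does not match that assumed composition. The reading below is a hypothetical example.

```python
# TDS as reported by a cheap EC meter: conductivity times an assumed
# conversion factor. Different assumed salt compositions give noticeably
# different TDS values for the same measured EC.

def tds_estimate(ec_uS_cm, factor=0.5):
    """TDS in ppm from electric conductivity, under an assumed conversion factor."""
    return ec_uS_cm * factor

# Hypothetical reading of 150 uS/cm at 25 degrees C:
print(tds_estimate(150.0))  # -> 75.0 ppm with the common 0.5 factor
```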

Happy brewing ! In BOTH senses

Special thanks to Alex Levitt for proofreading.

- Scott Rao’s blog and Instagram (both contain a wealth of information).
- The Barista Hustle website.
- The Specialty Coffee Association.
- Special thanks to Scott Rao, Charles Nick and Victor Malherbe.

In my previous post, I indicated that one of the steps in preparing the coffee bed when brewing coffee with a V60 is to dig a small trench with a finger so as to quickly wet the coffee bed more uniformly at the bloom phase. I mentioned that I had not seen a convincing demonstration that this actually helped, but because it’s easy and makes sense, I ended up adopting the practice.

Well, thanks to Barista Hustle, now the tests have been done, and it turns out it does help. They even found a better way to do it, which they describe as preparing the coffee bed into a “nest” shape, in other words into quite a deeper and larger trench.

They achieved this roughly with a spoon, but I find it easier to stick a chopstick in the center of the coffee bed, down to the bottom of the filter (taking care not to pierce it), and rotate it in circles that slowly increase in radius. I posted a video of this method below.

[Edit January 11, 2019: Scott Rao pointed out to me that the chopstick method I present here could potentially be compressing some coffee, which could lead to more channeling. I think this is a valid worry. However, the alternative (used by Perger and Rao) is to dig the nest with your fingers – I remain agnostic as to whether one compresses the coffee bed more than the other, but I find it harder to replicate the same nest shape every time using my fingers. Please keep this in mind if you use the chopstick method, and use a chopstick that has a pointy end if you can (make sure you don’t poke a hole in your paper filter). I will explore this more in a future blog post.]

Here you can find a PDF with an updated version of my first blog post, which includes this “nest” technique.

*References*