What is Astringency ?

Reminder: as an Amazon Associate I earn small commissions from qualifying purchases made through some of the links below.

If you have been reading about coffee extraction lately, you might be familiar with the term astringency, which is often used to describe poorly extracted coffee. It is a descriptor whose meaning is generally not well known, and it’s worth pondering why we dislike it so much in the specialty coffee world, in contrast to other communities like wine and beer.

First, let me try to describe what astringency feels like. Astringency creates a sensation of dryness in your mouth, and generally mutes a lot of other flavors, especially when it’s strong. It can often be found in fruits or vegetables, especially unripe ones − in fact, the astringency present in wine comes from the grape’s seeds and skin (Mattivi et al. 2009). If you have ever eaten an unripe banana and could not keep from grimacing as the dryness sensation invaded your whole face, then you know what I’m talking about. The most astringent thing I have ever experienced was the spongy white pulp separating the seeds inside a pomegranate. Another good example of astringency is over-extracted green tea, but it also contains other strong flavors, so it’s not the most precise example.

Astringency is not necessarily seen as a bad thing by wine experts, and I suspect that this is due at least in part to red wine being much more concentrated than filter coffee; wine typically has total dissolved solids concentrations between 1.7% and 3.0%, compared to 1.3% to 1.5% for filter coffee (e.g., see Schopfer & Lipka, 1973 and this French post), so astringency won’t easily mask everything else, and can instead be in balance with the overall taste profile. In tea and coffee, we try to get rid of astringency as much as we can, because it can easily become the dominant sensation, and can even come close to being the only perceptible one.

The astringency sensation is caused by soluble organic compounds that belong to a class called polyphenols, which includes tannins but also other complex molecules. They are often produced by plants as a defense mechanism against insects and other predators. These large, complex molecules can attach to proteins like Lego blocks, and form large clusters that can precipitate out of solution. This is what happens when we feel astringency: the polyphenols bind to proteins in our saliva and precipitate, forming clumps that inhibit our taste buds’ ability to taste the coffee properly, and cause this rough, dry sensation.

Fortunately for us, polyphenols are mostly very large molecules, which makes them heavier and harder to extract from coffee particles than smaller molecules like caffeine and the various acids we enjoy. This makes it possible to extract the good molecules while avoiding the polyphenols that cause this astringent sensation. If you prepare coffee by immersion, you likely won’t encounter astringency often, because the immersion method extracts coffee solubles more gently, especially if you grind coarse and don’t agitate too much. Furthermore, the extraction happens in a relatively uniform way; in other words, you won’t have large amounts of fresh water extracting a small number of coffee particles. Remember that fresh water is a much more potent solvent than water that already carries dissolved solids. However, immersion brews also never achieve very good filtration of coffee fines and other insoluble compounds.

Above: ellagitannin, an example tannin molecule from raspberries (source: Wikimedia Commons).
Above: malic acid (top) and caffeine (bottom), two examples of smaller, more easily extracted chemical compounds found in coffee (sources: Wikimedia Commons and PubChem).

If you happen to prefer the taste of coffee without these insoluble particles, as I do, you might prefer coffee prepared by the percolation method, where fresh water is poured over a bed of coffee, which filters out any undissolved solids (see this previous post for a more detailed discussion of the differences between percolation and immersion). Percolation brews are, however, much more prone to causing astringency, because channels can form where a larger fraction of the water passes through preferential paths, which can over-extract some small regions of the coffee bed (we sometimes call this local over-extraction). This allows heavier polyphenols to be extracted, and makes the resulting brew astringent.

One thing I do not know is whether polyphenols can be filtered out by the coffee bed itself. When preparing percolation brews, the coffee bed filters out a lot of undissolved solids and prevents them from getting into your brew. This is why a coffee bed full of fines will only clog the paper filter if you agitate it a lot; if you don’t agitate it, the coffee bed acts as a filter and retains these fines, preventing your paper filter from clogging. This is also why a V60 brew has far fewer undissolved solids than a typical Aeropress brew; the coffee bed in the latter is typically much shallower, so it lets more fines into your cup.

Based on experience, I know that a typical V60 coffee bed can filter out a lot more compounds than a paper filter alone, even in the extreme case of Whatman Grade 5 paper filters, which have average pore sizes of 2.5 microns; I’ll talk more about this in a future post. However, I’m not sure that coffee beds are such good filters that they would remove polyphenols that were extracted and dissolved in the slurry. I think it is unlikely, because although polyphenols can be much larger than other coffee compounds, they are still much smaller than a micron. For example, some wine tannins have sizes in the range 50−70 Å (McRae et al., 2014). If you are not familiar with these units, 1 micron is equivalent to 10,000 angstroms (Å), so a coffee bed would need to be a much better filter than a Whatman Grade 5 paper filter to remove polyphenols.

If the coffee bed is able to filter out polyphenols, however, the presence of a large channel could provide an additional reason why channels let polyphenols into our beverage: they would create localized regions of bad filtration, where polyphenols and undissolved solids pass through. That is an interesting question to me, because it would mean that over-extraction could potentially be fixed by a good, non-channeling coffee bed, even if polyphenols are getting extracted into the slurry for other reasons.

This whole idea of channels causing localized poor filtration made me want to directly measure the amount of fines and other undissolved solids in my coffee brews in an objective way. This can be done with the help of turbidity meters, but until about a month ago, I thought those were extremely expensive instruments built only for labs. When I saw Ray Murakawa using what seemed like a portable turbidity meter on Instagram, I got very excited, and he told me they are actually quite affordable ! Thanks to the help of my Patreon supporters, I promptly ordered one and started measuring some of my brews. Turbidity meters work on a different principle than refractometers, but they give us somewhat similar information: refractometers inform us about the concentration of dissolved solids, and turbidity meters inform us about the concentration of undissolved solids.

Above: a turbidity meter with its calibration solutions and a coffee sample.

When taking turbidity measurements, I realized something really interesting: the cloudiness goes up pretty fast as a brew cools down and stales. This was not too surprising at first, because cooler water is a less effective solvent, so we can expect the total amount of undissolved solids to increase as the brew cools. What I found really surprising is that even if you quickly cool down filter coffee to room temperature, its cloudiness keeps going up quite quickly for about half an hour, and then very slowly for more than 12 hours (I haven’t tried measuring older coffee). I’m not sure about this, but I suspect this increasing cloudiness at room temperature might be caused by polyphenols binding to some proteins that were also extracted from the coffee. Undoubtedly there are many other things that affect the absolute cloudiness of a coffee beverage, especially in an immersion, but it’s possible that the rate of increase at room temperature correlates with astringency. Furthermore, given that most of this turbidity increase happens within half an hour or so, I now wonder whether it’s related at all to coffee tasting bad after a brew stales.

Reading about polyphenols also made me realize there may be a reason beyond channeling why filter brew methods with very finely ground coffee (i.e., finer than espresso) often come out tasting astringent. For example, the high-extraction siphon method I posted a while ago only worked well with some specific roasts (some of which I listed in the post), and others came out very astringent regardless of whether channeling seemed to occur or not. Other examples include a few finely ground Aeropress and Buchner siphon brews I made that came out very astringent. I think some coffees may simply contain a naturally lower amount of polyphenols, whether because of their varietal, terroir, processing or roasting. Grinding so fine ends up breaking most coffee cells, and therefore the chemical compounds are washed out by water (a process sometimes called erosion) rather than having to diffuse through the small pores in the cellulose walls of coffee cells. In that situation, polyphenols may very easily end up in the slurry, and if the coffee bed can’t filter them out, they may end up in the final brew even in the absence of channeling.

You may think that the same should happen with espresso and Turkish coffee, and you’re probably right. Assuming I’m not entirely mistaken about polyphenols extracting easily from broken coffee cells even in the absence of channels, I think either the high concentration or other things present in these types of brews (oils, suspended solids, etc.) may balance out the presence of polyphenols and make them less overwhelmingly astringent. That might also explain why it’s hard to take an evenly, highly extracted shot of espresso and dilute it into an amazing filter brew.

I think that one promising avenue may be to precipitate the polyphenols post-brewing by adding proteins to the finished coffee beverage, much like is routinely done in wine and beer making. Some agents used for this kind of fining include egg whites, gum arabic, silica gels and a product called Polyclar (e.g., see this article about beer filtration) − these all bind polyphenols and drag them out of solution. One potential major problem is that these agents are typically left in the beer or wine for several hours to allow the precipitation to happen; if we wait that long with brewed coffee, it will probably taste very bad even if we remove all the polyphenols from it.

I know this post probably opens up more questions than it answers, but I’m hoping it will help us think more clearly about what makes a brew taste astringent and how to avoid it !

I’d like to thank Scott Rao and Sylvain Mussigmann for useful comments.

What Affects Brew Time

[Edit October 23, 2019: Kevin Moroney pointed out to me that the slurry getting more concentrated as it passes through the coffee bed also makes it more viscous as it reaches the bottom layers of a V60. This is indeed a very valid point, so I added a paragraph about it in the viscosity discussion below. In practice, this effect depends on the more direct variables of (1) brew recipe, (2) dripper geometry, and (3) the type of coffee, so it doesn’t change the final list of directly controllable variables that affect brew time. I still thought it was valuable to discuss.]

If you are used to pulling shots of espresso, measuring shot time might be a tool you use often to determine whether your grind size was dialed in appropriately for that coffee and set-up. This may lead you to believe that total brew time is also a very useful concept in the context of pour over filter coffee, for example to communicate your preferred grind size.

I think that this is really not the case, and I’d like to lay down the reasons why. I’m even slightly skeptical that shot time is that useful in the context of espresso, especially when communicated online between baristas who brew in different conditions, but I don’t have any strong opinion about espresso making, because my lifetime cumulative number of shots pulled currently stands at a grand total of 1. So, at least for now, let’s focus on pour over coffee, as I usually do.

The reason why I think brew time is not that useful is simple: there are way too many variables that affect it, several of which are almost never measured, and some of which would be hard to always measure accurately. I’m personally striving to eventually measure all of them so as to make my brews as repeatable as I can, but I’m not even sure yet whether that’s a realistic goal at all.

If we want to understand what affects drawdown time, it’s very useful to turn to Darcy’s law; this is an empirical equation that describes how a liquid percolates through a porous solid medium. In other words, it was originally deduced directly from observations and experiments rather than from fundamental concepts of physics. We now know enough about hydrodynamics to derive it from more fundamental principles, but it is only valid under some specific conditions, which are almost always met in coffee brewing, and more generally in daily life.

For the more mathematically inclined, let’s have a look at Darcy’s law applied to a liquid flowing down through a cylindrical medium, and then I’ll explain it with words:

$$Q = \frac{k A}{\mu L}\left(\rho g h - \Delta p\right)$$

In the equation above, Q represents the discharge, which is the volume of water coming out of the percolation medium, in units of volume per time (e.g., mL/s). k is the permeability of the porous medium, which can also be thought of as the inverse of its resistance. A more permeable medium will let more water pass through in a fixed amount of time; the physical unit of k is an area (e.g., m²). A is just the surface area of the medium (remember we are applying Darcy’s law in a cylindrical geometry, so A is the same at all depths). μ is the dynamic viscosity of the liquid; it is low for most of the pure liquids we encounter in daily life, such as distilled water and alcohol, but can get very large for more complex or heterogeneous stuff like honey or olive oil.

Dynamic viscosity is sometimes also called absolute viscosity, and it represents how much a liquid resists deformation. The more viscous a liquid is, the harder it is to push it through small pores. L is the total length of the percolation medium, which in our case usually corresponds to the depth of the coffee bed. ρ is just the mass density of the liquid (for example, in kg/m³), g is how fast objects accelerate when falling freely at the surface of the Earth (approx. 9.8 m/s²), and h is the total height of the column of liquid that is percolating (also counting the liquid above the surface of the solid medium). Δp refers to the difference in pressure below versus above the percolation medium (in physics and math, we use the triangle symbol Δ to represent a variation or a change). In the context of espresso, this would be the atmospheric pressure (below the puck) minus the pressure the machine applies on top of the coffee bed. There’s a minus sign here because of how we defined the pressure differential, such that a downward pressure results in a positive flow of liquid.

Those of you who have worked with Darcy’s law may not have encountered it in the form above: it is often shown in a simpler form where the ρgh term is ignored, because it is typically applied in contexts where the pressure applied on the fluid is much more important than the fluid’s weight (as is the case with espresso). But pour over coffee is gravity-driven, and therefore this more general form of Darcy’s law is the useful one.

Now that we have defined all the terms in Darcy’s law, let’s explain it with words in the context of coffee. Basically, it says that any of these changes will make water come out from under the coffee bed at a faster rate:

  • A more permeable coffee bed;
  • A wider coffee bed;
  • A shallower coffee bed;
  • Water that is less viscous;
  • Water that is denser;
  • Brewing from the surface of a denser planet;
  • Applying pressure on top of the coffee bed.

These effects are independent of each other, and they are also linear, which means, for example, that doubling any of the quantities mentioned above will double the flow. The geometry of the brewer won’t change any of these relations; it will only add a constant of proportionality (i.e., a number) in front of the right-hand side of the equation.
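To make this concrete, here is a minimal Python sketch of the equation above; the function mirrors Darcy’s law directly, and all the numbers in the example call are illustrative guesses rather than measurements:

```python
def darcy_flow(k, A, mu, L, rho, h, delta_p=0.0, g=9.8):
    """Discharge Q (m^3/s) through a cylindrical porous bed, following
    Q = (k*A / (mu*L)) * (rho*g*h - delta_p).

    k: permeability (m^2); A: bed cross-section (m^2);
    mu: dynamic viscosity (Pa*s); L: bed depth (m);
    rho: liquid density (kg/m^3); h: height of the liquid column (m);
    delta_p: pressure below minus pressure above the bed (Pa).
    """
    return k * A / (mu * L) * (rho * g * h - delta_p)

# Illustrative guesses: a 2 cm deep bed, a 4 cm liquid column, water at
# ~90 °C, and a made-up permeability of 1e-11 m^2.
Q = darcy_flow(k=1e-11, A=3e-3, mu=3.15e-4, L=0.02, rho=965.0, h=0.04)
print(f"{Q * 1e6:.1f} mL/s")  # ~1.8 mL/s; doubling h (or k, or A) doubles Q
```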

As you can imagine, the faster water flows through the coffee bed, the shorter your brew time will be. Therefore, we can look at all of these terms in Darcy’s law as potential variables that will affect brew time. You can already start seeing that there are quite a few of them, but it’s even worse than that: some of the terms above hide more than one variable combined together. The most dramatic one is permeability; in the context of pour over, it is affected by the following variables:

  • The grind size (coarser coffee is more permeable, finer coffee is more resistive);
  • The permeability of the coffee filter (affected by its pores and thickness);
  • The ridges on the inside of the brewer’s wall and filter creping (they allow air to flow upward outside the filter and increase permeability);
  • The saturation of the coffee bed (saturating the coffee bed with water increases its permeability, which is probably the most important reason why we bloom).

If you thought this was starting to look like a rabbit hole, brace yourselves, because grind size also hides several other variables:

  • The grind size that you set your grinder to (which will differ even between two units of the same grinder model);
  • The grinder rotation rate (a faster rotation will generally produce finer grounds);
  • The grinder design;
  • The grinder burr size, geometry, material and alignment;
  • The bean temperature when you grind them (here’s a paper about that and another interesting discussion, but I want to discuss this more in the future). 
  • The bean terroir, varietal, processing, roast development, and aging − all of these variables affect the bean hardness and density, which will make it shatter less or more during grinding. I will talk more about this in a future post, but if a coffee shatters more, it will generate more fines and result in a less permeable coffee bed. Defects and variations in green coffee bean ripeness and humidity will also likely have an effect on the roast and on shattering.

The width and depth of the coffee bed can be expressed as being dependent on more intuitive and practical variables:

  • The dripper geometry;
  • The dose of coffee (in grams);
  • The mass density of coffee (less dense coffee will result in a deeper bed for the same dose).

Ah, finally… we listed all the variables.

Nope ! We are far from done. You might think that the viscosity and density of water are known, fixed quantities, but they are not: they depend on its temperature ! The effect of changing water density is very small in the context of coffee brewing, as this data illustrates:

At sea-level atmospheric pressure, the difference in density between room-temperature and boiling water is only about 4%. The change in viscosity, however, is not small. I talked about this a little bit in an earlier Instagram post, where I built this graph of water viscosity (in red) from literature data (specifically, IAPWS 2008 and the Engineers Edge Machinery’s Handbook):

The difference between room and boiling water viscosity is therefore about 70% ! In the figure above, I also marked some typical slurry temperatures I obtained with a glass or plastic V60, and how the flow is affected if everything else is kept constant (in blue).
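If you want to reproduce numbers like these yourself, a standard empirical fit (the Vogel equation) approximates water viscosity well between 0 and 100 °C; note that this is a simpler stand-in for the IAPWS 2008 formulation used to build the graph:

```python
def water_viscosity(temp_c):
    """Approximate dynamic viscosity of water (Pa*s) using the empirical
    Vogel equation; good to a few percent between roughly 0 and 100 °C."""
    t_kelvin = temp_c + 273.15
    return 2.414e-5 * 10 ** (247.8 / (t_kelvin - 140.0))

mu_room = water_viscosity(25.0)      # ~8.9e-4 Pa*s
mu_boiling = water_viscosity(100.0)  # ~2.8e-4 Pa*s
print(f"viscosity drop from 25 to 100 °C: "
      f"{(1 - mu_boiling / mu_room) * 100:.0f}%")  # ~70%, as quoted above
```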

As you can see, warmer water is significantly less viscous, and it will therefore flow faster through the coffee bed. And please do not go thinking I am talking about kettle temperature here ! I say this not only because kettle thermometer readings are unreliable (in my experience, at least), but also because kettle temperature is only one of the variables that affect the temperature of water as it percolates through the coffee (i.e., in the slurry); these additional variables will also significantly affect the slurry temperature:

  • The dripper material (i.e. both its thermal mass and conductivity);
  • The room temperature;
  • Any air flow in the room;
  • The temperature of your ground coffee;
  • The moment during the brew (temperature will typically fluctuate);
  • How many pours your recipe has (more pours tend to result in cooler slurries).

The viscosity of water is also affected by its hardness and total alkalinity (I talk about these concepts in detail here), but the effect is very small unless you have very unusual water. Let’s quantify that a bit more. According to this scientific publication, the viscosity of water does depend on its bicarbonate content:

To put this into context, adding Na2CO3 at a concentration of 1 mol/L would result in a total alkalinity of 2000 meq/L, or in units we are more familiar with, about 100,000 ppm as CaCO3. That much carbonate is needed to almost double the viscosity of water. Given that brew water recipes for coffee are almost always below 80 ppm as CaCO3, we can safely ignore the effect of total alkalinity on viscosity.

The viscosity of water is similarly affected by its general hardness; here’s an example of how it increases as calcium chloride is added to water:

Yet again, we are talking about a 10% concentration (by mass) for a doubling of water viscosity, which is vastly higher than the typical water hardness we use for coffee: even achieving the “Hard AF” Barista Hustle hardness with calcium chloride would require a concentration of less than 0.03% by mass. We can thus also safely ignore the effect of water’s total hardness on its viscosity, and only care about its temperature.
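Both of these estimates are simple unit conversions; here’s a sketch, where the hardness figure used in the second conversion is my own assumed value for illustration rather than an official Barista Hustle number:

```python
MM_CACO3 = 100.09  # g/mol, calcium carbonate
MM_CACL2 = 110.98  # g/mol, calcium chloride

# 1 mol/L of Na2CO3 carries 2 equivalents of alkalinity per liter,
# and 1 meq/L corresponds to ~50 mg/L (ppm) as CaCO3:
alkalinity_meq_per_L = 1.0 * 2 * 1000
ppm_as_caco3 = alkalinity_meq_per_L * (MM_CACO3 / 2)
print(f"{ppm_as_caco3:,.0f} ppm as CaCO3")  # ~100,000 ppm

# Mass fraction of CaCl2 needed for a hardness of 200 ppm as CaCO3
# (an assumed illustrative value for a very hard brew water):
hardness_ppm = 200.0
mg_per_L = hardness_ppm * MM_CACL2 / MM_CACO3
print(f"{mg_per_L / 1e6 * 100:.3f}% by mass")  # ~0.022%, under the 0.03% quoted
```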

There is another thing that affects viscosity in the slurry: the concentration of coffee compounds being extracted from the coffee particles. In espresso brewing, the high concentration of the beverage can make it 2 to 3 times as viscous as the input water (e.g., see Clarke & Vitzthum 2001). For filter coffee, we can expect the effect to be much smaller, about a 30% increase if we assume viscosity is a linear function of concentration. This is still not negligible, and it means that the liquid near the bottom of the coffee bed will flow a bit slower because of its higher concentration, making the global flow slightly slower than one would expect based on pure water. However, for a fixed brew method, dripper geometry, and coffee type, the profile of concentration versus depth and time should be the same every time the coffee is brewed, so this effect can be categorized under the umbrella of these three more direct variables.
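Here is where that rough 30% figure comes from, assuming viscosity scales linearly with concentration; the ~10% espresso TDS used as the anchor point, like the linearity itself, is an assumption for illustration:

```python
# Linear-scaling sketch: espresso at ~10% TDS is quoted as 2-3x as viscous
# as pure water; interpolate down to filter-strength coffee.
espresso_tds, filter_tds = 10.0, 1.4  # % TDS (both assumed typical values)

for espresso_factor in (2.0, 3.0):
    slope = (espresso_factor - 1.0) / espresso_tds  # extra viscosity per % TDS
    print(f"if espresso is {espresso_factor:.0f}x: "
          f"filter slurry is ~{slope * filter_tds * 100:.0f}% more viscous")
# prints ~14% and ~28%, i.e., up to roughly 30% at the bottom of the bed
```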

We have still not unwrapped most of the variables between the big parentheses of Darcy’s law, and those are the ones that make pour over timing much nastier than espresso timing. In the case of espresso, the Δp term is much larger than the ρgh term, and this means that repeating the exact same pressure profile every time will ensure that the shot time only depends on the variables we already studied above.

As we won’t be brewing coffee on the surface of Mars (that would suck), there is only one other variable we haven’t considered, and it’s not a fun one: the height of water in your V60, or h, is what makes pour over timing much harder. This is true mostly because it depends on one input variable that we control and measure only rarely: the rate at which we pour water from the kettle. Someone that pours a lot of water very fast in a single pour will build a very tall column of water in the V60, and it will flow much faster, and finish brewing much before, a barista who pours slower or in several smaller pours. Similarly, the bloom length will obviously affect the total brew time because it’s a period where no water is being poured from the kettle. The geometry of the dripper also has an effect on the height of the water column, but that’s much easier to control or keep constant.

There are also circumstances during pour over brewing where Darcy’s law fails, although typically only momentarily. Darcy’s law is valid only for a fixed porous medium, and there are a few things that can change the structure of the coffee bed, which is our porous medium:

  • The preparation of the coffee bed (distribution, bloom, swirling after bloom, and tamping in the context of espresso);
  • The amount of agitation: water poured faster or from higher up will lift the upper parts of the coffee bed and temporarily make it shallower, increasing flow (using devices like the Gabi B or Melodrip will eliminate most or all agitation);
  • Channeling: the appearance of a large channel can increase the coffee bed’s permeability;
  • Erosion, also called fines migration: finer coffee particles being displaced to the bottom by water can decrease the permeability of the coffee bed. This can also cause the filter to clog, which will decrease the permeability even more.

Another possibly important factor that may affect brew time is how much coffee particles swell during the bloom phase. As coffee swells, it slightly closes the gaps between particles, effectively making the coffee bed less porous. I’m not sure what properties of coffee affect how much it swells, but it’s possible that beans of varying hardness or particle porosity may swell differently.

There’s also one final thing that can easily be forgotten: the exact way in which we choose to define the start and end of a brew is also a factor. For pour over coffee, an obvious choice of when the timer starts is when kettle water hits the dry grounds, but the moment where the brew ends is a bit less obvious. I personally choose to stop the timer when the level of brew water just passed the surface of the coffee bed and I can see ambient light first reflecting on the surface of the wet coffee; I mostly choose this moment because it’s easily repeatable.

I think we have now finally detailed all of the most important variables that affect brew time ! But hey, maybe I forgot some. If you think of more, I’d love to hear about them, but please don’t send me suggestions about light speed travel and kiloGauss magnetic fields lol.

It would be useful to regroup all of the important variables that we discussed above, in terms of what we can control directly when brewing (i.e., not viscosity), so here’s an extensive list:

  • Grinder setting, rotation rate, model, zero point, burrs and alignment;
  • Coffee terroir, varietal, processing and other bean characteristics (defects, drying, ripeness etc.), exact roasting process and development and bean aging;
  • Dripper model (geometry, material, inner wall ridges);
  • Exact brew recipe (bloom length and efficiency, number of pours, pressure or suction devices, coffee dose, pressure profile in the context of espresso);
  • Brew temperature;
  • Kettle flow speed and height, or anything else that affects agitation;
  • Room, bean and grinder temperature;
  • Exact filter model (e.g., Hario tabbed and tabless are different);
  • Air currents in the room;
  • Coffee bed distribution and preparation (and tamping if applicable);
  • Channeling and erosion;
  • How the brew start and finish are defined.

Now, if you want to have a consistent brew time, you need to measure, control and fix all of the things above, which is no easy feat. If you want to use brew time to communicate grind size, not only do the two people talking need to have the same grinder, zero point, and burr alignment, they must also be drinking the same coffee, and have the exact same dripper, filters, water temperature, recipe down to the pour rate, etc. You can see why I think brew time is not that useful for communication ! If you are insane like I strive to be and measure most of the things above, then and only then will changes in brew time inform you that something is going on.

I’d like to thank Kevin Moroney for useful comments.

Extraction Uniformity and Channeling

For a while now I’ve been trying to understand the details of channeling in pour over coffee, and I found it very difficult to find a convincing description of why channeling (and thus astringency) happens suddenly when we grind a bit too fine, even if the surface of the coffee bed looks flat at the end of a brew.

Yesterday I finally found a scientific paper about percolation in non-uniform porous media that I think may be the missing piece in how we think about channeling.

Before I get into it, I’d like to briefly try to explain why a Google search for percolation returns a lot of stuff not obviously related to water penetrating a porous medium. It happens that the maths which are useful to describe water traversing a porous medium are also very useful to describe many other systems in physics. This edifice of mathematics called “percolation theory” turns out to be extremely useful in describing large statistical systems like those often encountered in quantum physics, and therefore most of what you’ll tend to find online is specifically centered around quantum or particle physics rather than brewing coffee.

So, back to the scientific paper above – the authors used a computer simulation to model the details of how a fluid flows in a disordered set of obstacles, which is exactly what happens when we brew coffee. Water flows around our coffee particles, and because they have variable shapes and sizes, the voids between them (which we can loosely call “pores”) are also very disordered. Water will flow faster where the voids are larger, and slower where they are smaller.

This is a consequence of two things: the “no-slip boundary condition”, which states that the layer of water immediately touching a solid surface must have zero velocity; and the fact that water is viscous, which means that adjacent layers of water can’t easily have extremely large differences in velocity. The no-slip boundary condition is a consequence of the adhesion between water molecules and solids being larger than the cohesion of water molecules among themselves; it holds in most typical real-life conditions, and coffee brewing is one of them.

In other words, if you imagine a small “tube” of spacing between coffee particles with water flowing in it, the thin layer of water on the sides of the tube that touches a coffee particle is not moving, and the layer immediately on top of it (toward the center of the tube) can only move slowly. The next layer of water on top of all that can move a bit faster than the last one, and this trend goes on until you reach the layer at the center of the tube. You can imagine that a wider tube will have a larger central flow, and therefore also a larger average flow.
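This intuition can be made quantitative with the Hagen-Poiseuille equation for flow through an idealized cylindrical tube; the numbers below (pore radii, pressure, length) are illustrative guesses:

```python
import math

def poiseuille_flow(radius, delta_p, mu, length):
    """Volumetric flow (m^3/s) through a cylindrical tube with no-slip walls:
    Q = pi * delta_p * r^4 / (8 * mu * L)."""
    return math.pi * delta_p * radius ** 4 / (8.0 * mu * length)

# Two idealized "pores", one twice as wide as the other:
q_small = poiseuille_flow(radius=50e-6, delta_p=400.0, mu=3.15e-4, length=0.02)
q_large = poiseuille_flow(radius=100e-6, delta_p=400.0, mu=3.15e-4, length=0.02)
print(f"flow ratio: {q_large / q_small:.0f}x")  # 16x: flow scales as radius^4
```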

Here’s what this looks like in a computer simulation:

In the figure above, from Stanley et al. (2003), the white pixels are obstacles to the flow of water (much like coffee particles), and redder colors correspond to regions where water flows more rapidly. I rotated the figure to make it more similar to coffee percolation, where water flows downward. The simulation above would correspond to a V60 that drips at an extremely slow rate of 5 mg/s.

You can see that the flow of water is not very uniform, and some clumps of particles tend to be isolated from most of the flow (in the blue regions). In the context of coffee brewing, these particles will get under-extracted. But now let’s see what happens if we pump up the flow of water by applying more pressure on it:

The figure above is also a simulation from Stanley et al. (2003), with a thousand times more overall flow. It would correspond to a V60 that drips at a fairly rapid rate of 5 g/s.

If you look carefully at the second image, you’ll notice that there are now far fewer clumps of particles isolated from the flow of water, which is overall a bit more uniform than before (although still not perfectly uniform). The authors decided to characterize this global flow uniformity in an objective way – this is great for us, because it directly impacts the uniformity of extraction. To do this, they measured the spread of water’s kinetic energy (its energy of motion) across the pixels in the simulations, and defined a quantity π based on the inverse of that spread. Larger values of π mean that the flow is more uniform, and smaller values mean that it’s very non-uniform, or “localized” in only a few paths, as they call it. A perfectly uniform flow would have π = 1 (this can’t happen even with perfectly uniform spherical particles, because water still has to get around them), and an extremely non-uniform flow would have π close to zero. The authors parametrized the flow velocity in terms of the “Reynolds number” (Re), which we don’t need to get into here; we just need to know that a higher Reynolds number corresponds to a faster overall flow.
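If you’d like to play with this idea on a simulated flow field, here is a participation-ratio-style uniformity metric in the same spirit as the paper’s π; this is my own formulation for illustration, and may not match the authors’ exact definition:

```python
import numpy as np

def flow_uniformity(kinetic_energy):
    """Participation-ratio-style metric on a field of kinetic energies:
    1.0 for a perfectly uniform flow, near 0 for a highly localized one."""
    e = np.asarray(kinetic_energy, dtype=float).ravel()
    return e.sum() ** 2 / (e.size * (e ** 2).sum())

uniform = np.ones((64, 64))        # every pixel carries the same flow
localized = np.zeros((64, 64))
localized[32, :] = 1.0             # all the flow concentrated in one "channel"
print(flow_uniformity(uniform))    # 1.0
print(flow_uniformity(localized))  # 0.015625 (= 1/64)
```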

In the authors’ plot of π versus Reynolds number, very slow dripping rates correspond to a “flat” regime with very poor uniformity that doesn’t depend much on overall flow rate, but above the threshold of Re ~ 0.6 (or log Re ~ −0.25) you start getting more uniformity as you increase the overall flow. Now the question is: what Reynolds numbers correspond to realistic V60 preparations ? Are we in the regime where flow has an effect on uniformity or not ?

To answer this, I used the geometry of Hario’s plastic V60 with my typical 22-gram dose of coffee and the properties of water at a typical V60 slurry temperature of 90°C (194°F – this corresponds to a kettle set to boiling) to translate this into a V60 dripping rate, in grams per second. The threshold below which flow has very little effect on uniformity (Re ~ 0.6) corresponds to a V60 dripping rate of ~ 0.2 g/s, which is extremely slow. If we transform the x-axis of the figure above to V60 dripping rate, and plot it in linear rather than log space, we get this:

I removed a few data points in the “low flow” regime for visibility because they were very crowded.
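Here is a sketch of this kind of conversion; the characteristic pore size and effective bed cross-section below are my own rough guesses, not the exact V60 geometry used for the figure:

```python
# Pore-scale Reynolds number: Re = rho * u * d / mu, with u = Q/A the
# superficial velocity. Inverting for the mass flow in g/s. The values of
# d and A are illustrative assumptions.
rho = 965.0   # water density at ~90 °C, kg/m^3
mu = 3.15e-4  # water viscosity at ~90 °C, Pa*s
d = 5e-4      # assumed characteristic pore/particle size, m
A = 5e-4      # assumed effective bed cross-section, m^2 (~5 cm^2)

def drip_rate_g_per_s(reynolds):
    """Mass flow (g/s) corresponding to a pore-scale Reynolds number."""
    u = reynolds * mu / (rho * d)  # superficial velocity, m/s
    return rho * u * A * 1000.0

print(f"{drip_rate_g_per_s(0.6):.2f} g/s")  # ~0.2 g/s near the Re ~ 0.6 threshold
```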

If you want to measure your V60 dripping rate, you need to use a brew stand and weigh your beverage rather than the total water, and see how fast it goes up with time during your brew. To do this I use two Acaia scales (a Pearl and a Lunar) and a Hario brew stand (make sure your server is not too tall; I use the 400 mL Hario Olivewood one; apparently it’s only on Canadian Amazon), which allow me to build detailed brewprints like this one:

If you focus on the dark purple dashed line, you’ll see that my flow rate went from ~ 3 g/s when the V60 had the most water in it, down to ~ 1 g/s when it was almost empty, placing me right in the regime where flow rate affects flow uniformity, and therefore extraction uniformity, quite a lot.
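If you log the bottom scale’s readings, the instantaneous drip rate is simply the numerical derivative of beverage mass versus time; here’s a minimal sketch with made-up readings:

```python
import numpy as np

# Made-up (time, beverage mass) readings from the scale under the server:
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # seconds
m = np.array([0.0, 14.0, 29.0, 41.0, 50.0, 55.0])  # grams of beverage

drip_rate = np.gradient(m, t)  # g/s at each sample time
for ti, ri in zip(t, drip_rate):
    print(f"t = {ti:4.0f} s: {ri:.1f} g/s")  # decreases as the V60 empties
```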

Here’s why I think this is really interesting: this could explain why brews suddenly become astringent when we grind too fine, even if no channels were physically dug into the coffee bed by the flow of water. I think it would be confusing to call this effect of low-velocity non-uniform flow “channeling”, and I’d rather keep this word for situations where the coffee bed is eroded by water and coffee particles are pushed away to form a channel. Rather, I’d prefer to speak about this as “flow uniformity”, or its direct consequence “extraction uniformity”.

Speaking of which, there is one major limitation to the computer simulation these authors made: it treats the bed of coffee as a fixed and immovable object. Therefore, no bed erosion can occur, and no channels can be dug by water. This is why their simulation tells us that “the fastest flow is always best”, which might make you want to apply 150 bars of pressure on your pour over. If you did this, however, you’d find that your coffee bed would quickly erode and channel pretty badly, resulting in a super astringent brew (and probably an exploded coffee server). Espresso brewing often faces this challenge: you don’t want the flow to choke, but you also don’t want to destroy your coffee puck by eroding it with a very large flow and pressure. This is partly why puck preparation became so important in espresso brewing, as a way to make the coffee bed structurally more robust against erosion.

That’s a lot of information, so I think it would be good to remind ourselves of all the possible sources of non-uniform flow can be:  

  • Classical channels, i.e., water pushing away coffee particles to form a void space. These channels will appear more easily if coffee particles are lighter (therefore smaller), and may be visible from the formation of hollows at the surface of the coffee bed. This will also happen more easily if the global flow of water is made too intense by applying a lot of pressure, and can be mitigated by compressing the coffee bed with puck preparation like we do when pulling espresso shots.
  • The uniformity of your grinder’s particle size distribution will directly affect flow uniformity because it governs the uniformity of void spaces between the particles.
  • A flow that is too slow, either from filter clogging or a coffee bed resistance that is too high, will make the flow of water less uniform even in the absence of classical channels.
  • Clogging your filter will also likely not happen everywhere at once on the filter, causing the flow to be even less uniform because it will only pass where the filter wasn’t clogged.
  • Poor blooming that leaves dry spots in your coffee bed will also make your flow less uniform, because the coffee bed will have more resistance in these dry spots (dry coffee is more hydrophobic than wet coffee).

This realization made me think that maintaining a more stable flow of water through the coffee bed is crucial to get a good, uniform extraction. Here are a few predictions I think I can make based on the considerations above:

  • Applying a gentle pressure (or suction) on a pour over would allow us to grind a little bit finer without astringency, and therefore reach higher extraction yields, more particle size uniformity and better brews overall. I think this is only true up to a point, because if you apply too much pressure or grind too fine, then you need to care about puck preparation like for espresso.
  • Using James Hoffman’s continuous pour method rather than the two-pour method might produce more evenly extracted brews, because it eliminates a moment of slow water flow between the two pours where less water in the V60 is providing downward pressure. This is completely independent of temperature stability.
  • Using a warmer slurry temperature will make water less viscous, which will make it flow faster and therefore more uniformly.
  • Using too much water and cutting off the brew at the desired beverage mass may allow us to eliminate that final moment of slow water flow, and further improve extraction uniformity.
  • Using many pours will produce a less uniform extraction unless you compensate with a coarser grind setting. This is doubly true not only because less water in the V60 will be pressing down on the coffee bed, but also because the slurry temperature will be lower and water will be more viscous.

As you can imagine, I’ll now definitely try James Hoffman’s pour over method, and I will also investigate whether cutting off a brew produces a better coffee ! I’ll also pay a lot more attention to my V60 dripping rate and the coffee bed resistance that I calculate for my brews.

Measuring Coffee Concentration with a 0.01% Precision

[Edit October 14, 2019: After having used the method described below for about a month, I decided to add a few steps to ensure that no liquid gets caught inside the pipette’s rubber cap. If you want to jump straight to the modifications, just use ctrl+F and search for October 14.]

Lately I have been a bit unsatisfied with how repeatable my measurements of total dissolved solids (TDS) concentration were with the VST refractometer. The instrument itself has a quoted precision of 0.01% TDS, but I found that the only way I could achieve such repeatability was to brew a few coffees in a row, let them cool for about 40 minutes, then sample them carefully and measure them many times, as I did in one of my latest posts to assess my manual repeatability in brewing V60s. Lacking a good methodology to reach 0.01% TDS repeatability on every morning brew has held me back from properly characterizing the effects of V60 filters, grind temperature and a few other things.

There are two problems that will hold you back from measuring TDS with a 0.01% precision: evaporation, and a sample that is not at room temperature. Today I want to present a method that I recently designed through trial and (lots of) error, which allowed me to reach that 0.01% TDS precision I was hoping for. I will start with the answer and describe the method that worked for me, and then include more data to back up why it works, along with some other things I tried, for the more curious readers.

This method is intended for filter coffee, and therefore it does not include the use of syringe filters. For those, I would recommend Mitch Hale’s guide on refractometer usage, but it would probably be wise to add a step where you cool down your sample with a larger, thermally conductive pipette (I’ll talk more about the details below). A larger pipette would be needed because the syringe filter and plastic syringe both need to be “rinsed” with some coffee first to get a good measurement.

Required Equipment

The equipment you will need is the following:

  • A set of glass pipettes. You will need only one pipette, but two of the red rubber caps that come with them. Metal pipettes would also work, but they are very hard to find.
  • Two small elastics.
  • A spoon.
  • Access to a tap water faucet and a dry towel near your brew station.
  • A refractometer.
  • Distilled water.
  • Rubbing alcohol.
  • Two eye dropper bottles (optional).
  • Small microfiber cloths for refractometer cleanup (optional). You can also use tissues.

Preparing the Equipment

Place the two elastics around the pipette as shown below; you can leave them there between uses:

Note that the elastic close to the outer end of the pipette is not exactly at the tip, so that you avoid dipping the elastic in your coffee. Both elastics serve to prevent water from clinging to the underside of the pipette during rinsing and traveling all the way to either end. You may also notice that I mounted the rubber cap on the “wrong” end of the pipette. I actually prefer it this way, because the larger opening on this end creates less froth when depositing a sample of coffee on the refractometer lens.

The Detailed Steps

(1) Put distilled water on the lens

  • Use a generous amount of distilled water to make sure the lens is at room temperature.
  • I recommend using an eye dropper bottle to keep a small amount of distilled water handy.
  • Ideally wait a minute or more so that the temperature of the lens is at equilibrium with the distilled water; I do this step before I start brewing to save time.
  • Close the lid to avoid contamination.
  • You can blow gently on the surface of the distilled water if you don’t have time to wait a few minutes.

(2) Sample your coffee with the glass pipette.

  • Don’t plunge the elastic in.
  • Stir thoroughly with a spoon as you’re sampling (I don’t think swirling creates enough vertical mixing).
  • Season the pipette by sampling, dropping the sample back into the coffee pot (or elsewhere), then sampling again.

[Edit October 14, 2019: At this step I highly recommend wiping the tip of the pipette dry with a clean, dry cloth. I also recommend verifying that the inside of the loose rubber cap is dry, by pressing it against your wrist or finger while blocking most of the spout. If there’s water or coffee in it, you should feel or hear it hissing, and in this case you should rinse it and place a dry tissue inside it for a few hours. I keep a couple of spare rubber caps around in case that happens.]

(3) Rinse the pipette for 30 seconds.

  • First put a rubber cap on the pipette spout to avoid diluting your sample.
  • Open a gentle stream of cool tap water. Use the photo further below as a reference.
  • Hold the pipette by the two rubber caps, with the spout end a little bit higher.
  • Place the pipette close to the top of the water stream to avoid any water getting on either end.
  • Gently move left and right and rotate the pipette.
  • Do not rinse for longer than 30 seconds or you might make your sample too cold.
In the photo above I was experimenting with a metal pipette, but the rinsing is done the exact same way.

(4) Dry the pipette thoroughly with a towel.

(5) Leave the pipette to rest for at least a minute, ideally for 2-5 minutes.

  • Waiting allows the sample to reach room temperature even if the tap water made it a bit too cool.
  • I usually take this time to taste the coffee and note down my impressions without being influenced by the measured TDS, and then to zero the refractometer.

(6) Zero the refractometer.

  • You can optionally measure the distilled water temperature with a bead K-type thermoprobe right *after* zeroing if you want to test how well you are applying the method.
  • Do not rezero after putting the probe in because you might have contaminated the distilled water. Dry the probe thoroughly.
  • I do not know whether the VST refractometer keeps its zero when it automatically shuts down.

(7) Dry the refractometer lens.

  • You can very gently blow on the cleaned surface to favor evaporation.
  • You can look at an angle to see light reflecting on the lens; this makes it easier to see if any water droplets remain (see the photo below).
  • Use a clean and ideally absorbent cloth. A tissue can work. Be careful not to scratch the lens if you’re using an Atago; the VST has a sapphire lens, so you won’t scratch that with conventional stuff.
  • I use cloths and keep one for water or alcohol only, then I dry it between uses. I use a different one (or a tissue if I’m out of them) for wiping out coffee.
Droplets of water can still be seen on the refractometer lens in the two photos above.

(8) Place 3-5 drops of cooled coffee on the lens and immediately close the lid.

  • I recommend discarding 1-2 drops before putting a sample on the lens. This will help ensure that your sample is not contaminated with tap water.
  • I recommend placing the drops around the lens on the metal ring, to stabilize temperature as much as possible before the sample touches the lens.
  • Scott Rao recommends 3 drops only, to avoid shifting the lens temperature too much, but here we already cooled down the sample, so it is not as crucial. I still recommend using as few drops as you can as long as you obtain a small pooling of the sample on the lens, but in my experience up to 5 drops can be needed, because cool liquid is more viscous than warm liquid.
  • Evaporation is not an issue because the sample is cool; it happens extremely slowly in the absence of airflow.
  • Do not blow air on the coffee sample.

(9) Wait at least 30 seconds and not more than 5 minutes.

  • Don’t wait any longer because sedimentation or evaporation *could* become issues.

(10) Measure your TDS.

  • Take several measurements. It is possible that your TDS will slowly go up if your sample is too warm, or down if it’s too cold. Ideally it shouldn’t shift by more than 0.03% TDS with this method.
  • If your readings shift by more than ~0.03% TDS there is a risk that the lens temperature converged to a higher temperature than what you zeroed it at (causing a lower TDS reading), and if your sample was too warm there is a risk of evaporation (causing a higher TDS reading).
  • If you want to verify that your sample temperature is the same as your distilled water when you zeroed, do this right after measuring a converged TDS with a bead K-type thermoprobe. Clean it with a drop of alcohol after doing so.

(11) Clean up the equipment.

  • Wipe off the coffee sample; I use a clean cloth for this and put it in the laundry after one use, otherwise it can accumulate coffee oils. Tissues can work for this.
  • Add a few drops of alcohol then wipe again.
  • Clean up the pipettes by sampling hot water and leaving them to dry; you can also occasionally sample alcohol to do a deeper clean.
  • Keep the refractometer lid closed to avoid having anything contaminate the lens.

If you put hot coffee on the lens, I recommend cleaning it up, then covering the lens with as much distilled water as in the photo further above for a minute; wipe it, and repeat this three times. This will allow the lens to come back to room temperature. Once you have done this, you can start over (and don’t skip the re-zero step).

[Edit October 14, 2019: I recommend leaving the loose rubber cap facing downward, so that any liquid inside it slowly drips out over time. I usually place it leaning against a wall to do this.]

The Effect of Temperature on TDS Measurements

At this point you may be asking: “Ok, but how important is it really to have the right sample temperature ?” The answer is: REALLY important ! To illustrate this, I purposely put a warm sample on my refractometer lens and measured its TDS several times as it cooled down.

The room temperature was 75.7°F, and that was also the temperature of the distilled water I used to zero the refractometer. The correct TDS for this beverage was 1.45%; as soon as the sample temperature departed by 0.2°F or more, you can see differences of 0.01% TDS or more ! A mere 5°F difference can cause you to underestimate your concentration by as much as 0.08% TDS ! The takeaway for me is that I should always measure my sample within 0.2°F of the temperature at which I zeroed the refractometer with distilled water, otherwise my accuracy will be worse than 0.01% TDS.

The Effect of Evaporation on TDS Measurements

To investigate the impact of evaporation on our TDS measurements, I tried two different scenarios that resemble what I used to do when I cooled a sample of coffee in a ceramic ramekin. I put a small sample of water at 160°F on a milligram-precision scale, and noted how fast the weight went down as water evaporated. In the first case, I placed 10.210 g of water in the small plastic container that comes with the scale. In the second case, I placed a smaller 4.211 g sample in a tiny stainless steel cup with a similar opening surface that I found in my plumbing stuff. In the second scenario, the sample cools down much faster because it has less mass and is less insulated, which shows us the effect of the sample’s cooling rate on its rate of evaporation. Here’s what I found:

It seems the different cooling rates drastically affected the evaporation rate. The smaller sample placed in the metal cup, which cooled much faster, suffered much less evaporation. On top of that, the evaporation rates of both samples leveled off somewhat as they cooled.

But the more interesting question is how that would affect our TDS readings. If we simulate a 5-gram sample that we set apart to cool down, here’s how TDS would creep up with time at these two evaporation rates:

We can see that in the slower cool down case, TDS went up by a bit more than 0.01%, which is not great. Therefore, we want a strategy that allows us to quickly cool down a sample, ideally in an enclosed container with very little surface exposed to the air. This is why I think a pipette is a great place to cool down our sample !
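The arithmetic behind this kind of simulation is simple: the dissolved mass stays in the sample while water mass evaporates, so the apparent concentration scales as the inverse of the remaining mass. Here’s a sketch, with an assumed evaporation rate:

```python
# TDS creep from evaporation: dissolved solids stay while water leaves.
# The evaporation rate below is an assumed illustrative value.
m0 = 5.0           # initial sample mass, g
tds0 = 1.45        # true concentration, %
evap_rate = 0.004  # assumed evaporation rate, g per minute

for minutes in (0, 2, 5, 10):
    m = m0 - evap_rate * minutes
    tds = tds0 * m0 / m  # the dissolved mass m0 * tds0 is conserved
    print(f"after {minutes:2d} min: TDS reads {tds:.3f}%")
```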

Some More Data to Back up this Method

To verify how efficient this method is, I brought some tap water to 160°F, typical of my warmest coffee temperatures immediately after brewing (I use a Hario glass server, although mine is a smaller one with a 400 mL capacity), sampled it with a glass pipette while imitating the seasoning step, stuck my Bluetooth K-type bead thermoprobe in the pipette, and then rinsed the pipette with tap water. Here’s what I obtained:

I highlighted the moments where tap water was touching the pipette; as you can see I got distracted and missed it for about two seconds in the middle. Removing that small gap, I needed exactly 29 seconds to reach room temperature.

What’s also neat about this is that the result is not too sensitive on starting temperature, because the sample cools faster the larger the temperature difference is (that’s basic thermodynamics).

Let’s compare that with my previous method, where I left my sample cool in a ceramic ramekin:

Not only does the ramekin cause evaporation, the sample didn’t even get close to room temperature after 8 minutes !

For a few days now I have been measuring the temperature of my distilled water sample immediately after zeroing, and that of the coffee sample immediately after measuring TDS, to ensure that the method above allowed me to reach similar temperatures. Here are the results I obtained:

  • Brew 1: zeroed at 72.5°F, measured at 72.6°F.
  • Brew 2: zeroed at 76.3°F, measured at 76.6°F.
  • Brew 3: zeroed at 75.6°F, measured at 75.5°F.
  • Brew 4: zeroed at 74.4°F, measured at 74.2°F.
  • Brew 5: zeroed at 75.1°F, measured at 75.1°F.
  • Brew 6: zeroed at 75.5°F, measured at 75.5°F.
  • Brew 7: zeroed at 74.9°F, measured at 75.2°F.
  • Brew 8: zeroed at 74.3°F, measured at 74.2°F.

As you can see, the method worked quite well ! Despite my morning zombiness, I had an average difference of 0.14°F and a maximum difference of 0.3°F, which is just barely enough to affect TDS by 0.01% !
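Those two statistics follow directly from the list above; here’s a quick check:

```python
zeroed =   [72.5, 76.3, 75.6, 74.4, 75.1, 75.5, 74.9, 74.3]
measured = [72.6, 76.6, 75.5, 74.2, 75.1, 75.5, 75.2, 74.2]

diffs = [abs(m - z) for z, m in zip(zeroed, measured)]
print(f"average: {sum(diffs) / len(diffs):.2f} °F, max: {max(diffs):.1f} °F")
# average: 0.14 °F, max: 0.3 °F
```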

Failed Attempts and Other Methods

As I mentioned above, there are a few more things I tried that kind of failed, or weren’t really useful.

Metal Pipettes

I initially wanted to use a metal pipette rather than a glass one, because metal is more thermally conductive. I totally failed to find any metal pipettes that are not crazily expensive, so I shifted my focus to metal syringes and turkey basters instead. So, I went ahead and ordered the following horror on Amazon:

It did kind of work, but it made a huge mess, especially because the piston is gigantic; you could probably syringe up half a V60 brew with it. Having so little liquid in such a large enclosure made me worry that evaporation could be an issue, but the even more annoying part is that the piston doesn’t hold the coffee very well if you release it, so you get a lot of spillage.

But then I realized that the syringe needle was just about exactly what I wished a metal pipette would be, and that the rubber caps that came with my glass pipettes actually fit perfectly on it ! So there you have it, a perfectly fine metal pipette:

After playing a bit with it, I think it did a perfectly fine job, but it actually doesn’t make it that much faster to cool down the sample, and you lose something that I realized I like a lot: seeing the coffee sample inside the pipette. The glass pipette made it much easier to make sure I hit the sample with tap water, that no tap water entered the pipette, and that no condensation was forming inside the pipette.

In the figure above, you can see that the metal syringe also took about 30 seconds of tap water to reach room temperature. This is not an improvement over the glass pipette, so I decided to stick with glass. This may indicate that the flow of water, not the thermal conductivity of the pipette, is what determines the efficiency of this cooling method in this range of materials. I suspect that using a plastic pipette would shift that balance and make the method way slower, because plastic is an excellent heat insulator.

The Patience Option

I also decided to measure how long it would take to just let each pipette reach room temperature without taking any action, for the most patient among us. The results surprised me at first:

The glass pipette was faster ! I think this is because the glass has more thermal mass, so it initially takes up a lot of heat from the sample very fast, before the full pipette+coffee system has to cool through very slow air conduction. For the metal pipette, almost all of the cooling must happen via air conduction, because the metal has very little thermal mass.

If you’d like to do this with samples of about half a mL, the glass pipette took 12 minutes 16 seconds to be only 1°F warmer than room temperature, where the metal one took 17 minutes 38 seconds. I’m giving you the time for an acceptable 1°F difference because actually reaching room temperature takes a very long time (this is an asymptotic process).

The Aluminum Monster

There are a few more things that I tried which involved the freezer. I wrapped the glass pipette in aluminum foil to create an aluminum sleeve, wrapped the outer parts of the foil around a few scotch rocks to add thermal mass, and put that device (without the pipette) in the freezer.

You really don’t want to take a pipette out of the freezer, because water from the surrounding air will condense everywhere on it, including inside it, which would contaminate your sample.

I tried taking out the aluminum “monster” from the freezer and inserting the glass pipette in it, and here’s what I got:

As you can see, there’s a huge risk of overshooting involved, in addition to the method being less practical, because you need to build an aluminum monstrosity and let it cool in the freezer between every cup of coffee. I tried varying the number of scotch rocks (down to zero), but it was still prone to overshooting, and generally much slower than tap water at doing its job.

The Faucet Cooling System

The next thing I tried was the over-zealous, “hide it from your friends” cooling system: I wrapped aluminum foil around a glass pipette and connected both of its ends to 1/4″ rubber tubing, then wrapped the same piece of foil around my usual glass pipette as well. This created a sleeve where I could put my glass pipette in thermal contact with another glass pipette that is part of the rubber tubing:

I then cut the corner of a small 4×6″ vac-sealable bag, wrapped a rubber band around the tubing and fixed it inside the bag, to create a flexible inlet for the tubing. I wrapped the bag around the faucet, held it tightly in place with one hand, and turned on the tap water to get water running through the system and out into the sink.

This actually didn’t even spill or explode, which still surprises me. I had to open the faucet gently, but the problem with this system (besides making you look insane) is that the thermal contact between the two pipettes is not great, so it takes a lot more time to cool down the sample: about 10 minutes !

I hope you enjoyed reading this, I certainly had fun messing up my kitchen !

An Investigation of Kettle Temperature Stability

I recently received the kettle Brewcoat that I ordered a few weeks back; I previously didn’t dare order one because they don’t make any for my Brewista Artisan gooseneck kettle and they’re not cheap. Thanks to your support, I decided it was worth trying, and it would provide us with a “worst case scenario” of how much a loosely fitting Brewcoat improves kettle temperature stability. I went with the “Black felt/Black Polar Composite” version; I picked the Bonavita 1.0L kettle model because its size is very similar to the Brewista’s, and I was delighted that it fits quite nicely after adding just two pins:

The back of the kettle is where the fit is worst, because the Bonavita has its handle connected to the bottom of the kettle whereas the Brewista doesn’t:

I know, I scratched my kettle a bit 😢

If you are a Patreon backer, you might also know that a while back I ordered a small sheet of aerogel, which is one of the most insulating materials known. I decided to use it to add an insert under the Brewcoat, for even crazier insulation:

If you look carefully on the image above, you can see the additional layer of aerogel under the Brewcoat.

To investigate how each layer affects the kettle’s temperature stability, I used cool Montreal tap water to rinse the kettle thoroughly and bring it down to tap water temperature, then placed it on its turned-off base with the aerogel + Brewcoat layers on, filled with exactly 600.0g of cool tap water (this is the quoted capacity of the Brewista). The ambient temperature was 22°C (72°F) during the whole experiment. I made sure that the kettle lid was closed correctly, and inserted my bead temperature probe all the way into the vent holes of the lid, to make sure the probe reached the bottom of the water.

I logged the temperature curve with the Thermoprobe BlueTherm One Bluetooth device (which is NIST calibrated to a precision of 0.7°F; a purchase made possible by the support of my Patreon backers !), turned on the kettle base and immediately pressed the “Quick Boil” button. When the probe hit 212°F, I turned off the kettle base completely and waited for the probe to cool down to 192°F or lower. Once that was achieved, I exported the temperature curve as CSV to build the figures and comparisons below.

I then repeated the exact same experiment with the Brewcoat only, and then with the kettle without any insulation. Between each experiment, I thoroughly rinsed the kettle, temperature probe and kettle lid with cool tap water to bring their temperatures down, then threw away the water and refilled the kettle with cool tap water. Here are the resulting temperature curves, after I stitched their time axes to remove small delays caused by my manual inconsistencies:

You can immediately see that the “no insulation” case is much worse than the others ! In particular, it is very hard to keep the non-insulated kettle above 200°F, which is very consistent with my experience, as I constantly need to press the “quick boil” button between every pour during my V60 brews.

Adding the Brewcoat layer immediately makes things a lot better; there is no obvious gain in the time required to reach boiling temperature, but once that point is reached, the cool-down rate is massively reduced ! Even if you use a kettle that doesn’t require you to press “boil” every time you pick it up from the base, I suspect it will still have a hard time remaining close to 212°F unless it is insulated with more than a thin layer of metal, because according to the purple curve above, that would require a constant and significant energy input; this is also not very eco-friendly.

Another point that immediately becomes clear from the figure above is that adding a layer of aerogel provides significantly diminished returns; it would only be worthwhile if I planned to leave the kettle off for much more than 15 minutes. Maybe the aerogel layer could provide small energy savings over just the Brewcoat in a coffee shop environment, but I doubt they would be significant.

I also built a small table to compare different performance metrics across the three cases I experimented with:

In this table, I calculated the median upward and downward slopes in the heating and cool-down phases for each case. As we saw before, the heating rate isn’t significantly faster with additional insulation, but the cool-down rates are significantly different with either type of insulation.
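In case you’d like to repeat this analysis on your own kettle, here’s roughly how those median slopes can be computed from an exported temperature log in Python. The file name, column names and phase thresholds below are my own placeholders; the actual CSV export may be laid out differently:

```python
import numpy as np
import pandas as pd

# Assumed layout: a time column in seconds and a temperature column in °F
log = pd.read_csv("kettle_curve.csv", names=["t_s", "temp_F"], header=0)

# Temperature slope at every logged point, in °F per second
slopes = np.gradient(log["temp_F"].to_numpy(), log["t_s"].to_numpy())

heating = slopes[slopes > 0.1]      # arbitrary threshold for the heating phase
cooling = slopes[slopes < -0.005]   # arbitrary threshold for the cool-down phase

print("Median heating rate:", np.median(heating) * 60.0, "°F/min")
print("Median cool-down rate:", np.median(cooling) * 60.0, "°F/min")
```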

I also included how much time the kettle takes, once turned off at boiling point, to fall to 200°F and to 192°F. The non-insulated kettle fell to 200°F in only 34 seconds ! This is not great, to say the least. The next few lines indicate how many degrees are lost during 10-second, 30-second and 1-minute waits after the boiling point is reached, if the base doesn’t immediately power the kettle back up (e.g., if you forget to press “quick boil” again on the Brewista kettle). This also applies during your pours, because no kettle can keep receiving energy while it’s off the base ! If you pour for 30 seconds, a non-insulated kettle will already have lost a whopping 10°F; that really surprised me, and it makes me wonder why V60 kettles don’t already come with an additional layer of insulation !

One other thing that I noticed during this experiment is that the Brewista temperature indicator is not always reliable. Without any insulation, the true temperature is almost always 20°F cooler than what the kettle base indicates, and the base indicates 212°F when the true temperature is barely 196°F. If you wait long enough to hear the water vapor hissing out of the vents, however, then the true temperature is in the 211-212°F range; but for me that happened only about 15 seconds after the base indicated 212°F, so pay attention to the sound, not just the temperature reading of your kettle.

When I used only the Brewcoat, the kettle base temperature was reliable to within a degree until I got past 200°F, where it got gradually worse; it indicated a temperature about 2°F higher than reality at 202°F, and the difference increased up to 6°F when the base indicated 212°F while the true water temperature was only 206°F. However, I only had to wait a few more seconds for the true water temperature to hit a stable 212°F as water vapor started hissing through the vents. As far as I could tell, the case with the Brewcoat plus an aerogel layer was very similar.

I hope you found this as interesting as I did; it turns out we should worry about kettle insulation if we want to achieve the highest possible slurry temperatures in our V60s ! I will be gathering some more slurry temperature curves in the upcoming weeks, and I fully expect to see an increase of at least a few degrees, which is great because we are still well below the 205°F threshold above which, in my syphon tests, brews started to taste worse.

A Tool and Videos for Crafting Custom Brew Water

In this post, I discuss the composition of water that we use to brew coffee. If you are new to these discussions, I strongly recommend that you first read this previous post about brew water.

Barista Hustle recently released a very clever Excel calculator to determine the amount of mineral concentrates needed to craft brew water recipes starting from soft tap water instead of distilled water.

I thought this was a great idea, and decided to make a similar tool for those who, like me, prefer to use a single concentrate. While I was at it, I made it in a way that allows you to use more minerals on top of epsom salt and baking soda, which lets you control the concentrations of magnesium, calcium, sodium, sulfate and bicarbonate ions individually, instead of just hardness and total alkalinity.
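To give you an idea of the arithmetic hiding behind such a tool, here’s a minimal Python sketch of how salt weights translate into individual ion concentrations and CaCO3-equivalent hardness and alkalinity. This is not the spreadsheet’s actual code, just the standard molar-mass bookkeeping it relies on, with only the two basic salts included:

```python
# Molar masses in g/mol
MGSO4_7H2O = 246.47   # epsom salt (MgSO4 * 7 H2O)
NAHCO3 = 84.007       # baking soda

def ions_ppm(epsom_mg_per_L, soda_mg_per_L):
    """Ion concentrations in mg/L (ppm) contributed by each salt."""
    mg = epsom_mg_per_L * 24.305 / MGSO4_7H2O    # magnesium
    so4 = epsom_mg_per_L * 96.06 / MGSO4_7H2O    # sulfate
    na = soda_mg_per_L * 22.99 / NAHCO3          # sodium
    hco3 = soda_mg_per_L * 61.016 / NAHCO3       # bicarbonate
    return mg, so4, na, hco3

def hardness_alkalinity(mg_ppm, hco3_ppm):
    """Express general hardness and total alkalinity as CaCO3 equivalents."""
    gh = mg_ppm * 100.09 / 24.305     # hardness from magnesium
    kh = hco3_ppm * 50.04 / 61.016    # alkalinity from bicarbonate
    return gh, kh
```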

We are only beginning to understand the effects of magnesium, calcium and sodium ions on how fast different chemical compounds extract; this is discussed a little in the Barista Hustle water course, but so far I have not seen much more information about it elsewhere. I have seen even less discussion of the effects of sulfate ions on the resulting coffee taste or composition. This new water crafting tool could allow us to experiment with them while keeping everything else fixed.

The uses of this tool go even further; if you have reverse osmosis water with non-zero mineral composition, you can adapt your concentrate to still get the proper brew water composition. You could even use it to craft custom brew water starting from soft water bottles.

One thing this tool won’t allow you to do is create brew water that is lower than your tap, reverse osmosis, or bottled water in total alkalinity, hardness, or any individual ionic concentration. This is because doing so by adding minerals is just not possible (maybe it would be possible with reactive compounds, but let’s not go there).

This is what the tool looks like in Google Sheets

The tool I built is a Google Sheet; if you are new to Google Sheets, keep in mind that you won’t be able to modify it before you create your own copy, with File/Make a Copy. Asking me for edit permissions won’t work; granting them would modify the sheet for every other user. You can find the tool here.

Make sure to read the header instructions of the sheet. There are a few different versions of the calculator for those who don’t have access to all minerals.

I thought this was also a good moment to release publicly two of my previous Patreon-only videos related to brew water. In the first one, I filmed myself crafting a single batch of Rao/Perger brew water concentrate; you can find more explanations about the required material here:

In this second video, I use the concentrate to prepare a 4L container of brew water, starting with distilled water; you can find some more information about it here:

[Edit Nov 27, 2019: David Seng just let me know that he also built a water crafter page on his website. You should definitely check it out, as it seems very helpful and even includes the Langelier saturation index for scale and corrosion !].

I’m hoping this will help you brew better coffee, hopefully with a little less hassle for those of you lucky enough to live in a soft water area !

An In-Depth Analysis of Coffee Filters

A while ago, I decided to purchase a relatively cheap USB microscope to see what V60 filters look like. This is one of the first images I took of a Hario tabbed paper filter:

I was really pleased that the microscope had enough resolution to see the filter pores ! This opened up the exciting possibility of characterizing the pores of coffee filters, and determining which ones are optimal for pour over brews. One thing that became immediately apparent is that the pores are not circular, and they don’t seem to be produced by perforating the paper membrane; instead, they seem to occur naturally from the spacings between piles of paper fibers.

When I saw that nice image, I immediately grabbed a Hario tabless paper filter and took another image:

As you can see, this one is less immediately interesting; we can barely see the pores ! After being a bit bummed out about this, I realized it was simply caused by the tabless filters being quite a bit thicker, which reduces the contrast of the microscope’s LED light bouncing off the filter surface. Fortunately, it’s possible to fix this with a bit of image analysis. To do this, I wrote some code that re-adjusts the contrast of the image so that its pores become more apparent:

By that point, I realized that a proper filter analysis was indeed possible with this microscope, and things started to get really fun. I gathered this list of filters from various manufacturers:

Now, before we start discussing the actual analysis, I’d like to show you what each of them look like under the microscope.

Hario Tabbed Bleached Paper Filters for V60

Hario Tabless Bleached Paper Filters for V60

Hario Tabbed Unbleached Paper Filters for V60

Cafec Bleached Paper Filters for V60

“Coffee Sock” Cloth Filters for V60

Aeropress Bleached Paper Filters

Aesir Bleached Paper Filters

Chemex Unbleached Paper Filters

Chemex Bleached Paper Filters

Osaka Metal Filter for Chemex and V60

Hario Unbleached Paper Filters for Siphon

Hario Cloth Filters for Siphon

Calibration of Image Scale

Before these images can be used in a more quantitative analysis, the size of each pixel must first be determined. To achieve this, the microscope comes with a small calibration plastic that looks like this:

As you can see, there are many patterns that can be chosen from. I highly suspect that the printing standards for this calibration unit are not particularly great, so I decided to use the grid in the middle of the calibration plastic; I chose it because it provides many measurements of the scale at once, and it seems much easier for the manufacturer to get the spacing between printed lines right than the thickness of a single line. I took seven images of this grid at slightly different positions. These images each look like this:

These lines are marked as 0.1 mm (100 micron) wide. You can already see from the image that the line spacings are not perfectly uniform. There are also small defects on the image caused by imperfections in the plastic. I chose to take the median value of each row (a vertical median) to create a 1-dimensional signal of this grid, which, as you can expect, looks like an up-and-down pattern (dark pixels where a line falls, white pixels otherwise). I then used what is called an auto-correlation of that signal to determine by how much it can be shifted before its lines overlap with each other. I did this on the seven images that I took; I then took the average pixel scale as my best measurement, and the standard deviation as the statistical uncertainty on my measurement. This measurement error does not include any systematics: for example, if the manufacturer actually printed lines with an average spacing of 110 micron instead of 100, that 10 micron systematic error won’t be included in my error estimate. Because I have no way to know about such systematics, I just ignored them.
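Here’s a simplified Python sketch of this measurement, in case you’d like to reproduce it; my actual code differs in its details, and the peak detection below is deliberately bare-bones:

```python
import numpy as np

def pixel_scale(img, line_spacing_um=100.0):
    """Estimate the pixel scale (micron per pixel) from a grayscale image
    of a calibration grid with lines spaced by line_spacing_um."""
    profile = np.median(img, axis=1)       # median of each row (vertical median)
    profile = profile - profile.mean()     # remove the constant offset
    # Auto-correlation at all non-negative lags
    ac = np.correlate(profile, profile, mode="full")[len(profile) - 1:]
    # Local maxima of the auto-correlation; the strongest one (excluding
    # lag zero) corresponds to the line spacing in pixels
    peaks = np.where((ac[1:-1] > ac[:-2]) & (ac[1:-1] > ac[2:]))[0] + 1
    spacing_px = peaks[np.argmax(ac[peaks])]
    return line_spacing_um / spacing_px
```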

I also repeated a similar analysis with a horizontal median (the median value of each column) to check that the pixel sizes are the same in the vertical and horizontal directions. Here’s what I found:

  • Horizontal scale: 6.759 ± 0.009 micron per pixel
  • Vertical scale: 6.754 ± 0.007 micron per pixel

As you can see, the two values agree within the error bars, which is encouraging. Therefore I assumed that the scaling is the same in both directions, and combined them together to obtain a final image scale estimation:

  • Combined scale: 6.756 ± 0.008 micron per pixel

Analysis of Pore Distributions

Now it’s time to get even deeper into the technical details. As I mentioned, one of the more useful things to do with these microscope images is to determine the uniformity and quantity of pores in each filter. To do this, I opted to do some image smoothing with various bandpass sizes.

The unbleached paper filters I analyzed are brown rather than white. Because I didn’t want color to affect my results or make it harder to bring out the contrast between the filter surface and its pores, I experimented visually and determined that adding up 100% of the red channel and 50% of the green channel was a good way to mitigate the effect of the brown color on the detection of filter pores. I used none of the blue channel, because brown contains very little blue, which means that the undesirable brown-white variations in color across the surface of an unbleached filter are maximized in the blue channel.

Here’s what an original color image of a Hario unbleached filter looks like:

If we look only at the (contrast-scaled) blue channel, variations in brown shade will be very obvious:

If instead we looked at the combined R+G+B channels, these variations would get diluted a bit:

But taking the red channel plus half of the green channel gets us something that removes these variations even more:

As I mentioned before, an important step is to re-normalize the image contrast in order to see the pores clearly regardless of filter thickness. In astronomy, I need to do this all the time, and in my experience one efficient way to do it that is robust against outlier pixels is to subtract the 0.5th percentile of the image everywhere (i.e., subtract almost the smallest image value), then divide the image by its 88th percentile (i.e., divide by almost the largest image value). I then set any outlier pixels darker than zero to exactly zero, and any outlier pixels brighter than 1.0 to exactly 1.0.
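Here’s what these two steps (the channel mixing and the robust contrast stretch) look like in a short Python sketch, in case you want to try them on your own microscope images:

```python
import numpy as np

def pore_contrast(rgb):
    """Mix channels and re-normalize the contrast so pores stand out.
    rgb: float array of shape (ny, nx, 3) with values in [0, 1]."""
    # 100% of the red channel + 50% of the green channel, no blue
    img = rgb[..., 0] + 0.5 * rgb[..., 1]
    # Robust contrast stretch between the 0.5th and 88th percentiles
    lo, hi = np.percentile(img, [0.5, 88.0])
    img = (img - lo) / (hi - lo)
    # Clamp outlier pixels to the [0, 1] range
    return np.clip(img, 0.0, 1.0)
```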

Here’s what the image above would look like before applying such a contrast normalization:

The pores are much harder to see in the image above, compared to this one where the contrast was normalized:

There is another neat trick that can be used to remove large-scale variations across the image very efficiently, as long as they are larger in scale than the largest possible pores. Basically, you divide the original image by a smoothed version of itself, and this brings out only the small-scale variations across the image. I used a Butterworth filter to do this; it uses a slightly different bandpass to smooth the image compared to the more typical Gaussian smoothing, but I found that it was better at preserving the exact pore shapes. In all cases, I removed only the largest 10% of spatial scales in all images with this step.

Here’s how the Butterworth filtering affects the image above:

As you can see, this removed a lot of the variations caused by creping or shadows.
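If you’d like to reproduce this step, here’s a minimal Python sketch of the idea; the cutoff below is an arbitrary placeholder rather than the exact bandpass I used:

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.05, order=2):
    """Smooth an image with a radial Butterworth low-pass filter applied
    in Fourier space; cutoff is a fraction of the Nyquist frequency."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    radius = np.sqrt(fx**2 + fy**2) / 0.5   # 0.5 cycles/pixel = Nyquist
    response = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * response))

def remove_large_scales(img, cutoff=0.05):
    """Divide the image by a smoothed version of itself, keeping only
    the small-scale structure (the pores)."""
    smooth = np.maximum(butterworth_lowpass(img, cutoff), 1e-6)
    return img / smooth
```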

Another step I took was to blow up the image resolution by a factor of 20 using an interpolation algorithm. This allows me to measure pore sizes at the sub-pixel level, and to obtain smoother pore size distributions with more data points in them. The next step in detecting filter pores is to choose a threshold that separates a pore from the filter surface. I used a threshold of 0.5, which means that any pixel darker than half of the normalized image scale is considered a pore. You can see visually what this results in, with all detected pores marked in red:

At that point, I simply counted the fraction of pixels that were marked as pores in this image.
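In Python, these last two steps (the upsampling and the thresholding) boil down to something like this simplified sketch:

```python
import numpy as np
from scipy.ndimage import zoom

def detect_pores(img, upsample=20, threshold=0.5):
    """Upsample a contrast-normalized image, flag every pixel darker than
    the threshold as a pore, and return the mask plus the pore fraction."""
    big = zoom(img, upsample, order=1)   # bilinear interpolation
    pores = big < threshold              # True wherever a pore was detected
    return pores, pores.mean()           # mask and fraction of pore pixels
```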

If you are not interested in the details of how I coded the construction of pore size distributions, you can skip the next paragraph, and the equation !

To do it, I used the magic of undefined numbers in coding. In most coding languages, those are called “Not a Number” (NaN) values, and they can either be your worst enemies because they crash all of your software, or your best friends because you always keep them in mind and make sure your code doesn’t crash when they are encountered. Believe me, they should be your friends, because they open up a lot of nice coding tricks. One of these tricks is the following: you can create a mask image that has a value of 0.0 at every pixel corresponding to the filter surface, and NaN at every pixel where there is a pore. You can then use some fast and well-vetted box-smoothing algorithms to look at larger scales in the image, and this will cause the filter surface to slowly creep inward and close down the detected pores.

Do this with many different smoothing box sizes (let’s call such a box size x), and you will gain information on the fraction of filter pores at every size ! Another neat trick about the dynamics of how NaN values creep inward is that they will give you a list of pixel locations where square particles of a maximum radius of exactly x can pass through the pores; normal smoothing algorithms would underestimate what size of particles can pass because they would blur the edges of filter pores. If you count the fraction of masked pixels (let’s call that m) for every box smoothing size (recall that we named this x), it can be demonstrated mathematically (I will spare you the details) that the distribution of pore radii f(x) is related to the second derivative of the masked fraction versus smoothing box size:

where p is the pixel scale (in pixels per micron).

Basically, how much the fraction of masked pixels changes as you are smoothing the image gives you an indication of how much pore surface is being closed down.

I found this algorithm efficient for quickly measuring pore sizes regardless of their shapes across the image, and measuring m(x) is basically asking: “If you take one square particle of radius x, what is the fraction of surface positions where it could pass through a filter pore ?”
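One way to code this up without juggling actual NaN values is to notice that the trick is equivalent to a morphological erosion of the pore mask: a pore pixel survives a box of radius x only if the whole box around it is made of pore pixels. Here’s a sketch of m(x) under that equivalence (f(x) then follows from its second derivative, e.g. with np.gradient applied twice):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def masked_fraction(pores, box_radii):
    """m(x): the fraction of pixels where a square particle of radius x
    would still fit entirely inside a pore, for each box radius x."""
    mask = pores.astype(np.uint8)
    m = []
    for x in box_radii:
        # Erode the pore mask with a (2x+1) by (2x+1) box
        eroded = minimum_filter(mask, size=2 * x + 1)
        m.append(eroded.mean())
    return np.array(m)
```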

These calculations resulted in a pore size distribution for each microscope image that I obtained. I then combined the distributions from every image of a given filter into an average pore size distribution for that type of filter. I displayed pore diameters rather than radii, because I suspect this is what most people will assume when they hear “pore size”. Here’s an example of what I obtained with the Hario tabbed paper filters:

As you can see, the peak of the distribution in terms of number of pores seems to be located below the spatial resolution of the microscope, but this is not an issue: we are mostly interested in how the pore distribution affects flow rate, and we will see later that pores smaller than 10 micron make an insignificant contribution to flow for all the filters that I tested.

Here’s how the distribution of each filter compared:

As you can see, the Osaka metal filter has way more pores than the other filters. I find it more interesting to compare the normalized pore distributions, and to group them by brew method:

Pour Over

Aeropress

Siphon

As you can see from the distributions above, paper filters tend to have more uniform pore size distributions (the slopes of the distributions are steeper). One thing I found really interesting is that all the unbleached filters seem even more uniform. This hints that the bleaching process may be affecting the pore distributions of filters, possibly in a way that hurts brew quality, but we’ll come back to this later.

The units of the distributions above can seem a bit confusing, as they are in number of pores per micron per millimeter squared. The “per micron” part comes from these distributions being probability densities, i.e., you need to integrate the area under the curve to obtain an actual number of pores, which removes the “per micron” unit. The “per millimeter squared” part refers to the surface of the filter. If you integrate these distributions across all possible pore sizes, for example, you can count how many pores per millimeter squared each filter type has. With a slightly different operation, you can also calculate the fraction of each filter’s surface that consists of pores (I removed the metal filter to get a clearer figure):

Filter Thickness

It is obvious from manipulating all the filters above that they have very different thicknesses. This is an important property of filters because it affects their flow rates. I thus ordered a digital caliper with a 20 micron precision to actually measure the thickness of every filter. Precisely measuring the thickness of a paper filter is not as straightforward as you might think; if you close the caliper too hard, the filter will get compressed and potentially damaged, and you won’t measure a realistic thickness in the context of water flowing through the filter.

To overcome this problem, I gently closed the caliper on each filter to obtain a more realistic thickness, but this brings up a whole new problem of measurement reliability. Fortunately, I could easily repeat these measurements many times at different filter locations and on different filters, so I kept taking measurements until my error on the average thickness became much smaller than the quoted 20 micron precision of the caliper. Stats geeks will know that this error on the average can be calculated as the standard deviation of all values divided by the square root of the number of values.
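In code, that formula is a one-liner:

```python
import numpy as np

def error_on_average(values):
    """Standard error of the mean: the standard deviation of all values
    divided by the square root of the number of values."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / np.sqrt(values.size)
```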

I ended up taking a total of over 700 thickness measurements (across all filter types) before I was confident in my results. Here’s the list of filter thicknesses that I obtained:

  • Chemex unbleached: 167 ± 23 μm
  • Chemex bleached: 210 ± 22 μm
  • Hario unbleached: 203 ± 21 μm
  • Hario tabbed: 206 ± 21 μm
  • Cafec: 207 ± 21 μm
  • Hario tabless: 242 ± 22 μm
  • V60 cloth: 690 ± 22 μm
  • Aeropress: 120 ± 22 μm
  • Whatman: 170 ± 22 μm
  • Aesir: 220 ± 22 μm
  • Siphon paper: 220 ± 22 μm
  • Siphon cloth: 645 ± 22 μm

And here’s the same data, displayed as a figure:

Filter Flow

Another important point about filter properties is how fast water flows through them on average. This is affected by factors like the pore size distribution and the filter thickness, but also by their rigidity and how well they stick to the surface of a V60, because a better-sticking filter will slow down the upward escape of air and therefore slow down flow. Because flow rate is a function of many complex and intertwined factors, I also measured it with a simple experiment further down.

We can however make a prediction of flow rate, based on an idealized planar filter with a uniform thickness and circular holes. The theory behind it is given in some detail here, but basically the only part you need is this one:

q \propto \left( \frac{3}{r^{3}} + \frac{8\,t}{\pi r^{4}} \right)^{-1}

where q is the flow rate in volume of water per second through a pore, r is the radius of the pore, and t is the thickness of the filter. The hidden proportionality constants are related to the pressure drop above and below the filter, and to the viscosity of water. The first term, in the third power of r, is called the Sampson term, and corresponds to the case of a filter much thinner than its pore sizes. The second term is called the Poiseuille term, and corresponds to the case where the pores are tubes much longer than their diameter. This combination of the two extreme cases is not perfectly exact, but it’s much simpler than the real solution, and it always remains within 1% of the real value.
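Numerically, applying this equation to the measured pore size distributions and integrating looks something like this in Python; the variable names are mine, and the proportionality constants are dropped, so only relative comparisons between filters are meaningful:

```python
import numpy as np

def flow_distribution(radii_um, f, thickness_um):
    """Weight a pore size distribution f(r) by the idealized flow per pore,
    using the combined Sampson + Poiseuille terms (pressure drop and water
    viscosity are dropped, so units are arbitrary)."""
    q = 1.0 / (3.0 / radii_um**3 + 8.0 * thickness_um / (np.pi * radii_um**4))
    return f * q

def total_idealized_flow(radii_um, f, thickness_um):
    """Integrate the flow-weighted distribution over all pore radii."""
    return np.trapz(flow_distribution(radii_um, f, thickness_um), radii_um)
```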

We can use the equation above to transform the distribution of pore sizes into a distribution of flow, and by integrating the full distribution we can estimate the total idealized flow rate for each filter. Here’s what I found, grouped by brew method:

Pour Over

Aeropress

Siphon

As you can see from the figures above, pores above ~20 micron are responsible for most of the flow in all cases. This means that my microscope resolution (each pixel is 6.8 micron) is able to resolve the pores most relevant to understanding the flow dynamics. You might be surprised that the positions of the Hario tabless versus tabbed paper filters are swapped compared to the pure pore distributions (i.e., the Hario tabless seemed a bit more uniform in terms of pore sizes, but less uniform in terms of flow); this is because the tabless filters have a slight over-density of very large pores, which matter much more than small pores when we talk about flow. In practice, this makes the Hario tabbed filters seem slightly preferable.

Here are the total idealized flow rates for all filters, obtained by integrating the flow rate distributions above:

After calculating these idealized flow rates, I went ahead and measured the real flow rates of all the pour over filters above. I took the immersion dripper switch from Hario, put a filter on it and stuck a marked chopstick in the filter. I used 150g of room temperature (25°C) distilled water to pre-rinse the filter so that it stuck to the V60 walls, turned off the switch so that it wouldn’t flow, and added another 150g of room temperature distilled water.

I used an iPhone chronometer that lets you place your finger on the “start” button and only starts counting when you release your finger; this makes it easier to trigger both the Hario switch and the chronometer at the same time. Once the switch was opened, I used the iPhone LED light to pay careful attention to the water surface, and I hit the chronometer again when it passed the black mark on the chopstick. I took 6 measurements per filter; this allowed me to get a better measurement and to estimate my measurement errors with standard deviations. Here are the resulting filter flows:

  • Cafec: 5.79 ± 0.03 mL/sec
  • Hario tabless: 6.89 ± 0.05 mL/sec
  • Hario tabbed: 11.03 ± 0.02 mL/sec
  • Hario unbleached: 15.3 ± 0.1 mL/sec
  • V60 Cloth: 18.1 ± 0.3 mL/sec
  • Osaka metal: 67.8 ± 0.6 mL/sec
  • Chemex bleached: 7.23 ± 0.02 mL/sec
  • Chemex unbleached: 9.82 ± 0.02 mL/sec

The detailed data are available here. Keep in mind that flow can be affected by water viscosity, your grind size, filter clogging, etc., so these values are most interesting when compared to each other in a relative sense. The error bars are mostly due to my ability to start and stop the timer at the right time; my standard deviation on timings across all filters was 0.2 seconds, and apparently the average human reflex delay is 0.25 seconds, so it seems credible that the reflex inconsistency would be of that same order of magnitude.

Now let’s compare the idealized versus measured flow rates, and see if they correlate well:

If the idealized flow rates were perfect, all filters would fall along a straight line in this figure. As you can see, this is not the case at all; it seems that filters made of different materials, or with different creping, behave differently. I think this is due in part to how they adhere to the walls of the V60, but creping inside the filter may also contribute to slowing down the flow, because water will prefer to flow mostly along the crepe valleys instead of everywhere on the filter surface. If that’s true, then filters smoothed on the inside would be preferable, as they would promote a more uniform flow across the filter surface. I won’t be able to determine whether that’s true with any more certainty in this post.

Another hypothesis I had is that the pores of paper filters may be better represented by diagonal tubes instead of straight vertical ones, in which case the “effective” thickness of the filter would always be some factor larger than its true thickness. While this may be true, I observed no clear correlation between filter thickness and how offset the idealized flow rate was from the real flow rate; this indicates that this effect is not the biggest cause of these differences.

The Tainting Effect of Filters

Another often-discussed aspect of coffee filters is how they might directly affect the taste of a coffee beverage by contributing chemical compounds to it. This is what can produce the undesirable papery or cardboard taste, and it is the most often quoted reason why pour over filters need to be rinsed before brewing. To be sure, there are other reasons to do it: pre-heating the brewing vessel and making sure the filter is well positioned in it are also important reasons why we pre-rinse pour over filters.

I once did a preliminary experiment where I pre-rinsed Hario tabless and tabbed filters (both are bleached) and then immersed them in hot water for a few minutes, and tasted the water. I was not able to confidently say that I could taste anything different from just the tap water, so I concluded that I could use either of them without worrying about taste, at least if I pre-rinsed them.

But there is a more objective way to compare how much each filter can taint your coffee beverage: with distilled water and an electrical conductivity (EC) meter that measures total dissolved solids (TDS) in water. I decided to take these measurements by emulating a water temperature, contact time and water weight similar to typical brewing conditions. I put the dry filter in the Hario switch pour over device, turned off the flow, and poured 200.0g of distilled water (1 ppm) into the device, weighed with a 0.1g-precise brewing scale. I didn’t use more than 200g to avoid over-filling it. I then immediately put a cork lid on top of it for heat insulation, and waited 3 minutes before turning on the flow switch.

I then placed the water in a small ceramic cup, which I covered with a plastic lid to stop evaporation. I waited a few hours until the samples came close to room temperature (I measured them at 27°C, while the room temperature was 25°C). I decided 27°C was OK because the TDS measurements had stopped changing between 40°C and 27°C, and waiting for the samples to cool further would have taken several more hours. The EC meter that I used applies a temperature correction, but it is not perfect, so it’s best to remain within a few degrees of 25°C to get absolute TDS measurements. I made sure that all samples were measured at exactly the same temperature (27°C). From these measurements, I subtracted the 1 ppm of solids that were already present in my distilled water. Here’s what I obtained:

  • Hario tabbed bleached: 0 ppm
  • Hario tabless bleached: 0 ppm
  • Cafec bleached: 1 ppm
  • Hario unbleached: 5 ppm

As you can see, bleaching actually does what it’s supposed to do, but for some reason the Cafec filters seem to retain a small amount of dissolvable compounds. These measurements are consistent with my inability to taste any effect of the bleached filters in a water immersion, especially given that I had also pre-rinsed them. This also seems to lend credence to Scott Rao’s choice not to pre-rinse the Aeropress or Whatman filters that he places on top of his high-extraction espresso pucks.

This all seems like a cautionary tale against using unbleached filters, but what I tested here is the inherent ability of these filters to taint your coffee beverage if you don’t pre-rinse them. As I mentioned before, tainting is not the only reason we pre-rinse pour over filters, so I would certainly not recommend that you stop pre-rinsing bleached filters; but there are scenarios, like an upper espresso puck filter, where pre-rinsing might not matter. Now, what would be even more interesting to me is a more practically applicable question: what happens to these numbers if we pre-rinse the filters ? And how much do we need to pre-rinse them ?

To answer these questions, I carried out a different experiment that resembles two of the filter rinsing techniques that I use, applied to the Hario unbleached filters as a worst case scenario. The goal of these experiments was to see whether I could detect any additional dissolved solids imparted by the filter.

The first rinsing technique, which I use most often, is to pre-rinse with cold tap water first, because brew water is a bit more precious. I then pour a little bit of hot brew water to preheat the vessel (probably not that important since I use the plastic V60), but also to replace the water suspended in the filter with brew water that has the desired alkaline buffer. This is probably a bit overkill, as the amount of retained water is small, but it’s an easy thing to do.

For the first experiment, I therefore used room-temperature distilled water (1 ppm) with the Hario switch, but this time I left the flow switch open. I poured three pulses of approximately 50 grams of water into three distinct cups, and then a final pulse of hot distilled water into a small ceramic container. I covered the ceramic container with a plastic lid and let it cool down to room temperature, so that I would get a more accurate TDS reading with the EC meter. Here’s what I obtained, after subtracting the initial 1 ppm from every measurement:

First pour (dry filter): Poured 50.3 g water, resulted in 5 ppm TDS in 47 g output, i.e. 3.3 g water was retained by the filter and 0.235 mg of filter material was dissolved.

Second pour: Poured 49.7 g water, resulted in 3 ppm TDS in 48.5 g output, i.e. 1.2 g water was retained and 0.146 mg of filter material was dissolved.

Third pour: Poured 49.9 g water, resulted in 1 ppm TDS in 49.6 g output, i.e. 0.3 g water was retained and 0.050 mg of filter material was dissolved.

Fourth pour (hot water): Poured 49.8 g hot water, resulted in 1 ppm TDS in 49.8 g output, i.e. no water was retained and 0.050 mg of filter material was dissolved.

This indicates that tap water removed most of the filter material, but switching to hot water extracted a tiny bit more, although I would be skeptical that 1 ppm of filter material could be humanly tasted. It also seems that the filter retained a total of 4.8 g of water and contributed 0.481 mg of paper material.
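In case the arithmetic above isn’t obvious: 1 ppm corresponds to 1 mg of dissolved solids per kg of water, so the dissolved mass is simply:

```python
def dissolved_mg(tds_ppm, output_g):
    """1 ppm = 1 mg of solids per kg of water; e.g., 5 ppm measured in
    47 g of output water means 5 * 47 / 1000 = 0.235 mg of material."""
    return tds_ppm * output_g / 1000.0
```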

For the second experiment, I used a similar method with only hot water, and I used 100 grams for my initial pour because I knew 50 grams would not be enough to remove all the filter solids.

First pour (dry filter): Poured 100.0 g hot water, resulted in 10 ppm TDS in 95.8 g output, i.e. 4.2 g water was retained and 0.958 mg of filter material was dissolved.

Second pour: Poured 49.6 g hot water, resulted in 6 ppm TDS in 48.8 g output, i.e. 0.6 g water was retained and 0.293 mg of filter material was dissolved.

Third pour: Poured 49.3 g hot water, resulted in 0 ppm TDS in 48.5 g output, i.e. 0.8 g water was retained and no detectable filter material was dissolved.

In this case, it seems clear that 150g of hot rinsing water was enough to deplete the filter of all dissolvable materials. The filter seems to have retained a total of 5.6 grams of water, a bit more than in the previous experiment. I don’t think this is due to the water being hot, but rather to how fast I moved the V60 when I picked it up. In this case, a total of 1.251 mg of filter material was dissolved, more than double what was dissolved in the first experiment. It’s therefore preferable to rinse unbleached filters with hot water; otherwise you’ll likely need a much larger amount of water. This is not true of bleached filters; as you may recall, even the first pulses of water contained no dissolved solids.

Uniformity Index

One filter property that I expect could help achieve a uniform pour over extraction is how consistent the flow is across the surface of the filter. If pores are grouped in some regions of the filter, this will contribute to channeling, i.e., water will take preferential paths across your coffee bed and won’t extract it in a very uniform way. To estimate this effect for the various filters I investigated, I calculated the total idealized flow in each microscope image that I took of a given filter, and looked at the standard deviation of these values for that filter. If a filter always shows a similar flow regardless of where I placed the microscope, that is a great thing, and it means that channeling should be minimized.
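Concretely, the index I define in the next paragraph boils down to a single line of Python (a sketch with my own variable names):

```python
import numpy as np

def uniformity_index(flows_per_image):
    """Inverse of the standard deviation of the idealized flows measured
    on the different microscope images of one filter; a higher value
    means the flow is more uniform across the filter surface."""
    return 1.0 / np.std(flows_per_image, ddof=1)
```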

I defined the uniformity index (UI) simply as the inverse of the standard deviation of the unit-less idealized flow across filter regions; this way, a higher UI corresponds to a more uniform filter. I did not assign any physical units to this index, because that would require specifying the water viscosity, the pressure drop across the coffee bed, etc. Hence, this index is only useful as a relative measure between filters, and it cannot be compared with indices calculated from any other pore detection algorithm or microscope. Here are the UIs I calculated for all filters under consideration:

The error bars above are based on small number statistics. Remember that a higher UI is good; it means the filter will flow more uniformly across its surface. Notice how the unbleached filters come out at the top again !

As I already mentioned, I suspect that filters with a higher UI may produce a more uniform extraction, maybe even more so if you use small doses in your pour overs. However, what I do not know is the practical importance of this effect; it could be too small to matter. This does open up interesting experiments, however; for example, it seems possible that the Hario unbleached paper filters may allow us to reach more even extractions, which would result in a higher average extraction yield when everything else is kept fixed.

Filter Clogging Index

A typical problem that one can encounter when brewing coffee is a sudden decrease in flow rate, caused by very fine coffee particles clogging the pores of a filter. For this reason, the flow rates that I measured above must be taken with a grain of salt: if a filter flows very fast because it has very large pores, coffee fines might be able to clog them; as a result, the filter flow rate might decrease heavily during a brew, and depend strongly on your brew method (e.g., the volume of your bloom phase, whether you stir the slurry or not, etc.). If the filter flows fast because it is thin, the decrease in flow rate might be less important, and less sensitive to your technique.

It is possible to calculate a clogging index in an objective way, by calculating the overlap between the pore size distribution of a filter and the particle size distribution of a grinder. Obviously, different grinders (and different grind settings) will produce different amounts of fines, but using any reference grinder will provide a valuable relative assessment of how sensitive each filter is to clogging. The resulting clogging index will therefore be most important to consult when using a grinder that produces a larger amount of fines; in my experience, you generally get more fines with smaller burrs, conical burrs, or anything that can widen the full particle size distribution, like misaligned burrs.

Not everyone seems to prefer grinders that produce the lowest possible amount of fines (so far, that is my preference), but whatever your preference is, you should always try to avoid filter clogging. A filter will typically not clog in a uniform and immediate way, which means you will get channeling as water starts to follow preferential paths along the un-clogged filter pores. As you might already know, channeling will over-extract coffee along the channel paths, and cause astringency (a dry feeling in the mouth) in the resulting brew. Therefore, if you use a grinder that produces more fines, you should consider using filters that have smaller clogging indices. On the other hand, if you use a grinder that produces very few fines, this might be less important.

You might be aware that I wrote an app to measure particle size distributions, but I have never used it in combination with a microscope, and without this it won’t be possible to build a particle size distribution down to particle sizes as small as filter pores. In the future I will experiment with this, but for now I opted to use a laser diffraction particle size distribution instead. This distribution was generously sent to me by John Buckman and Scott Rao, who brought a sample from the Weber Workshops EG-1 (version 1) grinder with stock burrs to a laser diffraction device:

I used the web plot digitizer to extract data from that photo, and I calculated the clogging index of each filter by performing the following operation:

\mathrm{CI} = \frac{\int_{0}^{100\,\mu\mathrm{m}} \left[ \int_{x}^{\infty} f(r)\,\mathrm{d}r \right] p(x)\,\mathrm{d}x}{\int_{0}^{\infty} f(r)\,\mathrm{d}r \;\int_{0}^{100\,\mu\mathrm{m}} p(x)\,\mathrm{d}x}

where f(x) is the pore size distribution of the filter and p(x) is the laser diffraction particle size distribution of the EG-1. The numerator includes a reversed cumulative density function of the pore size distribution, because a coffee particle can contribute to clog any filter pore that is the same size or larger. Technically, the closest name I’d know for this operation is the integrated product of a cumulative density function with a probability distribution function; it’s not a convolution.

If this looked like alien symbols and sounded like gibberish to you, that’s totally fine. Here’s what it comes down to: the clogging index is an estimation of the average fraction of flow that can be clogged by coffee particles smaller than 200 micron in diameter (100 micron in radius). If CI = 90%, it means that the average coffee fine smaller than 200 micron would be able to block pores that contribute 90% of the flow rate of a given filter, if you use enough coffee and agitate it enough. It does not correspond to the true fraction of flow that will be blocked by clogging, because that depends on how much coffee you use and how large your filter surface is; some fines may be small enough to block 90% of the filter flow, but if they are not present in large enough numbers to block all the pores, or never come in contact with them, then that won’t happen. I don’t want to attempt to make these numbers representative of the true fraction of blocked flow, because not only would it become overly complicated, it would also not be accurate, since coffee filters are not idealized flat planes with circular holes. These numbers are however very useful for comparing filters in a relative sense, to understand which filters might clog more easily than others.
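Here’s a numerical sketch of that operation in Python; the grids, names and normalizations are mine and may differ in detail from the exact calculation behind the numbers below, but it shows the structure of the computation:

```python
import numpy as np

def clogging_index(x_um, f_filter, p_particles, max_radius_um=100.0):
    """Integrated product of the reversed CDF of the filter distribution
    with the grinder's particle size distribution, on a common radius
    grid x_um (in micron)."""
    total = np.trapz(f_filter, x_um)
    # Reversed CDF: fraction of the filter distribution at radius >= x,
    # since a particle can clog any pore of its own size or larger
    rev_cdf = np.array([np.trapz(f_filter[i:], x_um[i:])
                        for i in range(len(x_um))]) / total
    small = x_um <= max_radius_um   # particles below 100 micron in radius
    num = np.trapz(rev_cdf[small] * p_particles[small], x_um[small])
    den = np.trapz(p_particles[small], x_um[small])
    return num / den
```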

Here are the CI I obtained for all filters tested here:

Remember that a small CI is good, as it means your filter is less sensitive to clogging. The trend of unbleached filters coming up as the superior ones seems to hold yet again !

Final Recommendations

I know this post contains a massive amount of information, so I’d like to distill some of what I learned into practical recommendations. Please keep in mind that these are based on limited experiments, which comes with caveats: for example, it is possible that wetting the filters affects their pore size distributions, and it is possible that more even pores across the filter surface only affect channeling in a very thin layer of the coffee bed. The recommendations below should therefore not be taken as absolute truth, but rather as a guide for what seems most worth exploring in your next V60 experiments.

Metal Filters in General

Metal filters suffer from a major problem in my opinion: their pores are typically so large that the flow is only constrained by the grind size, which forces you to grind very fine in order to obtain practical brew times. But this also means that a lot of fines will pass through the metal filter. Metal filters also differ from paper filters in that they don’t filter through mazes of spacings between paper fibers, but rather through straight, large holes in an otherwise very uniform and flat surface. Therefore, metal filters won’t clog, and any fines small enough to penetrate the pores will end up directly in your beverage. This means metal filters won’t produce very clear brews like other pour over filters; instead, they will produce a beverage with suspended solids and fines, with less clarity and more body. Personally, I’m not a fan of this.

Cloth Filters in General

Cloth filters suffer from a similar problem to the metal filters in terms of flow rate and large pores. However, they are in my opinion much worse because they are a hassle to properly clean and re-use. I wrote a lot more about this in my post about a high extraction yield siphon recipe, and I’d encourage you to read it if you want more information about proper management of cloth filters, but I just gave up on using them.

Paper Filters for Pour Over

Paper filters are much more interesting to me, as their smaller pores prevent coffee fines from passing into the beverage. The coffee bed does a lot of the job of retaining coffee fines, but if the filter had larger pores, some of them (either those already at the bottom, or those that migrated down) would still pass through.

One big takeaway from this analysis is that bleaching seems to deteriorate the quality of pore distributions. This is true in terms of the general spread in pore sizes, but also in terms of how much the flow of water varies across the surface of the filter.

This was surprising to me, as my initial bias was to disregard unbleached filters because of their tainting potential. But as we also saw above, it seems possible to remove all dissolvable solids with an adequate pre-rinse using 150 grams of boiling water.

I have not yet accumulated any practical evidence for this, but I suspect that using unbleached filters might both reduce channeling in all situations (because flow is more uniform across the filter surface) and make your brew more robust against clogging (because of the smaller number of large pores), all while having a slightly faster flow rate (because there are more pores per unit surface) !

The same conclusions hold for Chemex, although their much larger filters almost certainly require more water during the pre-rinse. I haven’t done this experiment, but based on their weight difference (5.4 g for Chemex versus 1.4 g for Hario), I would recommend using about 4 times more rinse water.

Another result that surprised me was how the Hario tabless filters seem to be worse than the Hario tabbed filters on all metrics; they flow more slowly, have less uniform pores and are more susceptible to clogging compared to the tabbed filters.

The Cafec filters show all the signs of being a weird case of filters that were bleached more gently; both their pore distribution quality and their ability to taint water lie in between the bleached and unbleached cases. If you are really afraid of paper taste, or hate using a lot of water to pre-rinse your filter, they might be the optimal solution for you, but keep in mind that they flow much more slowly than the other paper filters.

All of these conclusions can be visualized in the figure below, where I placed all paper pour over filters on a graph of clogging index versus flow rate; filters further toward the right are easier to clog, and those toward the top flow faster. I also used larger symbols for the filters that have a more uniform flow across their surface, which means that larger symbols should be less susceptible to channeling regardless of your brew technique.

Aeropress Filters

For the Aeropress, we really only have two contenders here, as I doubt people will start ordering and cutting Whatman filters for their Aeropress brews. But even if you had the motivation to do so, it seems that the Aesir filters come out on top in terms of their robustness against both channeling and clogging. They do seem to flow more slowly, however, because they are almost twice as thick as the standard Aeropress filters; they have more pores per surface area than the Aeropress filters, but probably not enough to make up for their thickness. However, remember that Aeropress brews have a variable that is not accessible to pour over: you can press harder, and make up for that difference.

It seems to me that Aesir filters are therefore more desirable for Aeropress brews, which did match my very limited and subjective experience.

What does a Clogged Filter Look Like ?

I decided to also try something fun with the microscope, and imaged a clogged and dried V60 Hario tabless paper filter:

We can clearly see that the pores are stained with a brown color, perhaps caused by coffee oils, but we cannot see obvious fines blocking the entrance of any pore. That isn’t too surprising, as we might expect clogging to happen a little deeper than the filter surface.

I hope you enjoyed this post ! It is definitely the one that required by far the largest amount of work yet, but I think it was worth it.

Acknowledgements

I’d like to give special thanks to Alex Levitt for sending me the Cafec filters, to Scott Rao for giving me the Chemex bleached filters and for useful discussions without which I would not have thought about the possible importance of the uniformity index, and to Doug Weber for useful comments.

How Coffee Varietals and Processing Affect Taste

I recently read James Hoffman’s fantastic book The World Atlas of Coffee and followed the also-fantastic new Terroir course on the Barista Hustle website. All of this reading motivated me to think a bit more about coffee varietals when I’m enjoying a cup of coffee. Previously, I had noticed some obvious taste differences between varietals, like the fact that typical Kenyans such as the SL28 varietal tend to have a nice taste of blackberries (or tomato when the roast is underdeveloped), but I did not think about it much further.

His book also made me realize that I couldn’t find much information about the typical taste profiles of different coffee varietals or processing methods, other than anecdotal facts and the tasting notes of individual roast batches. Clearly, there is a ton of subjective tasting notes available out there, and I thought that if we could only collate a big pile of them, I could probably distill them and see if some interesting trends come out.

I decided to contact Alex, a friend who built a really cool mobile application (for iOS and Android) called Firstbloom, where they actually did just that. It allows users to build their own personal library of various roasters’ bags and to consult other people’s ratings. One really nice thing about it is that unavailable past offerings don’t disappear (some day, someone will need to explain to me why roasters always completely delete the web pages of their past offerings, rather than just unlinking them). Anyway, Alex was super happy to help me with this idea, and he generously sent me his metadata on 1,500 coffee bags, with varietals, tasting notes and processing for every one of them ! Alex and his team built Firstbloom as a passion project (much like my blog), and I’m highly appreciative of their work and precious help with this idea. So, in a way, today’s blog post was sponsored by Firstbloom’s incredible efforts at collating these data; otherwise it would not have been possible.

Taste Descriptors by Coffee Varietal

The first thing I decided to investigate is the taste descriptors that come up most often for each coffee varietal. For this, I only used coffee processed with the washed method, because it is the most abundant and I also think it is the process that brings up varietal characteristics most clearly without influencing them (don’t tell Scott Rao, but there are some naturals that I love even if I think they distort the tasting profile). A very neat tool to visualize such data is a word cloud: each word is displayed with a size representative of how often it came up in a list. There are some Python packages that do basic word clouds, but I found this website that offers way more options. Coding that from scratch seemed like an annoying enterprise, so I decided to just use it.

I did not just collate all of the taste descriptors and count the number of repetitions when I assigned weights to each word, the way one would typically build a word cloud. This would be an OK way to do things, but it would not necessarily amplify the differences from one varietal to another. As you can see in this figure, there are some words that come up way more often than others when describing any kind of coffee:

These descriptors are not the most interesting to me, as they are the ones that come up most often regardless of varietal. What I would rather see are the specific descriptors that come out in one varietal more than in others. To do this, I counted the number of times a descriptor appeared within a varietal, and normalized that by the number of times it appeared in any coffee, so the descriptors shown in larger fonts above get somewhat muted. In other words, if a taste descriptor happens a lot for SL28 and not that much for other varietals, it will be amplified more than a descriptor that happens a lot for SL28 as well as for any other coffee. There is one potential drawback of doing this: imagine there is just one bag of coffee ever that had the taste descriptor carrot. It would end up being extremely amplified in the word cloud of the one varietal where it happened, because it was never used for any other coffee. To mitigate that effect, I put a “ceiling” on the level of amplification that rare words can obtain; I decided that no word could be amplified by a factor larger than 3.3 because of its rare use in other coffee varietals.
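For the curious, here is a minimal Python sketch of this kind of weighting scheme. The toy data and the exact amplification formula are stand-ins for illustration (the real Firstbloom metadata and my actual code differ), but the normalization by global frequency and the 3.3 amplification ceiling play the same roles:

```python
from collections import Counter

# Toy stand-in for the real data: each varietal maps to the list of
# taste descriptors collected from all of its (washed) bags.
descriptors_by_varietal = {
    "SL28": ["blackberry", "black currant", "sweet", "blackberry"],
    "Geisha": ["floral", "jasmine", "sweet", "bergamot"],
    "Bourbon": ["chocolate", "nutty", "sweet", "caramel"],
}

# How often each descriptor shows up across all coffees combined.
global_counts = Counter()
for words in descriptors_by_varietal.values():
    global_counts.update(words)
n_global = sum(global_counts.values())

def word_cloud_weights(varietal, ceiling=3.3):
    """Weight each word by its count within the varietal, amplified by
    how specific it is to that varietal, with the amplification capped."""
    local_counts = Counter(descriptors_by_varietal[varietal])
    n_local = sum(local_counts.values())
    weights = {}
    for word, count in local_counts.items():
        # How much more frequent the word is here than in coffee at large.
        amplification = (count / n_local) / (global_counts[word] / n_global)
        weights[word] = count * min(amplification, ceiling)
    return weights

print(word_cloud_weights("SL28"))
# "blackberry" gets amplified (specific to SL28); "sweet" does not.
```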

Now, the fun part ! Here are some collections of taste descriptors for some of the most widespread coffee varietals:

These already jumped out as very representative of my experience. Ethiopian heirloom coffees often taste very floral and have a distinct citrus-like character often described as lime (see this great book review by James Hoffmann where he talks a bit more about “heirlooms”). As I expected, SL28 is largely dominated by descriptors like blackberries or black currant. Just writing this makes me want to brew a good Kenyan cup. The Geisha varietal seems dominated by floral and fruity descriptors, my personal favorites (I’m so original). One thing that surprised me a bit more is how similar Caturra and Bourbon come out. But this is not actually that surprising, because Caturra arose from a naturally occurring mutation of the Bourbon varietal (as described at World Coffee Research).

There are some significant caveats I should add to these results. First, some taste descriptors are caused by roasting more than varietal. I suspect that some varietals like Caturra are a bit harder to roast properly, and harder to diagnose once roasted, than most Kenyan and Ethiopian coffees. If I’m right about this, then part of the unique character of Caturra above might be caused by less optimal average roasting, and not by genes. For example, I suspect that some nutty descriptors might fall in that category.

Another likely bias comes from terroir, which might have a strong effect on taste; by terroir, I refer to the type of soil, weather, shade and other aspects of how farmers take care of their crops. Add to this the fact that some countries like Kenya often grow a very selective list of varietals (e.g. SL28, SL34, Batian and Ruiru 11), and you will end up with a strong varietal-versus-terroir correlation. This means that some of the taste descriptors coming up in SL28 above could have more to do with terroir than with actual coffee genes. In order to tell them apart, we would need a lot more data on typical Kenyan varietals grown outside of Kenya. Looking at the word clouds of these four particular varietals next to one another might make you worry even more about this strong correlation:

These four varietals are also sometimes grown, roasted and sold as a blend, so their taste descriptors will tend to be somewhat mixed together, even in the unlikely scenario where terroir has no effect.

Although these word clouds are biased by terroir and roasting, they are still super useful to me, because the bags of coffee that I’m gonna drink are affected by the same biases. From a user perspective, it’s therefore really fun to know which varietals will typically get you into what kind of taste territory. I would however bet that in 10 years, the typical user experience might have shifted far from the word clouds above.

But even this more limited use of the word clouds above is not perfect, because there’s yet another effect that clearly taints them, and makes them a little less reliable as a guide to which coffee you want to buy: human bias. I found that roasters will very rarely write tomato on their bag of Kenyan coffee, even when it tastes like nothing else but tomato soup. This is not surprising, because tomato is widely known as a roast defect from under-developed Kenyan coffee, so writing it on a bag of coffee would be a bit of a bad self-advertisement. Therefore, there are some “surprise” taste descriptors that won’t end up in the word clouds above, but may end up in your cup of coffee !

Taste Descriptors by Coffee Processing

Another aspect that is widely known to affect the taste profile of a cup of coffee is the process by which the pulp is removed and the coffee beans are dried, generally referred to with the umbrella term processing. So, I decided to make similar charts, but this time grouping bags by processing rather than varietal. This is what came up for the two most dominant processing methods, washed versus natural:

Hahaha, that was just a joke ! Here’s what really came up from the actual data on natural-processed coffees:

I may be joking about it, but there are a lot of naturals that don’t actually taste dirty at all. I enjoy these “clean” naturals much more than the other ones, but that’s just my preference. For example, all the natural coffees I have ordered from Gardelli so far were very clean, and I loved them.

The same limitations that I mentioned above still apply here, plus a new one: some varietals tend never to be natural-processed (e.g. the typical Kenyan varietals) or vice-versa, and that will introduce some correlation between varietal and processing, further biasing the two word clouds above. I remember reading that the way “washed” coffees are processed in Kenya versus Colombia is also very different, so that’s yet another bias !

Speaking of human biases, here’s a really funny observation:

One of the top descriptors of honey-processed coffee is honey… hmmmm suuure, I’m very skeptical that this is not just tasters being influenced by the actual process name. I would bet a full dollar that other descriptors in the sweet category would replace it if we did this blindly.

While I showed you the word clouds for the main categories, I generated a lot more of them. I will gather them all at the end of this post so that it doesn’t get too cluttered with figures !

The Flavor Wheel

So, that was super interesting to me, but there are more fun things we can do. For example, a few groups have defined flavor wheels for visualizing coffee flavors; those of the Specialty Coffee Association (SCA) and Counter Culture are probably the most well-known, but I thought there might be a way to arrange the categories and the wheel itself that would be more intuitive to me. Here’s how I defined the first- and second-level categories.

I decided to split the wheel in two parts, where all the flavors generally seen as positive are on the top half, and those generally seen as less desirable effects of roast or green coffee are placed on the bottom half. And while we’re talking about halves, why not make it look like a coffee bean ? I used an elliptical coordinate system to achieve just that. Once you get familiar with these figures, they can tell you a lot about the coffee from a quick glance, and I love that. Here are the ones I generated for the main varietals; there are similar figures for 30 varietals and 14 coffee processing methods (with high-resolution vectorial PDF versions) which I made available to my Patreon supporters (Bourbon-tier and up):

As you can see, Geisha is the king of floral attributes ! It’s also interesting how Bourbon and Caturra often have nutty flavors typically associated with roasting. It makes me wonder whether it’s harder to make great roasts out of them, but I don’t know enough about roasting.
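In case you’re curious how such a bean-shaped wheel can be drawn, here’s a rough matplotlib sketch of the elliptical coordinate trick. The categories and their fractions are made up, and this is only one possible way to do it, not my exact plotting code:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical category fractions for one varietal: desirable flavors
# on the top half, roast-driven ones on the bottom half.
top = {"floral": 0.30, "fruity": 0.40, "sweet": 0.30}
bottom = {"roasty": 0.50, "earthy": 0.50}

a, b = 1.3, 1.0  # semi-axes of the ellipse (a > b gives a bean-like squash)

def draw_half(ax, categories, theta0, theta1):
    start = theta0
    for name, frac in categories.items():
        end = start + frac * (theta1 - theta0)
        t = np.linspace(start, end, 64)
        # Elliptical "polar" coordinates: scale x and y by different semi-axes.
        x = np.concatenate([[0.0], a * np.cos(t), [0.0]])
        y = np.concatenate([[0.0], b * np.sin(t), [0.0]])
        ax.fill(x, y, alpha=0.6)
        mid = 0.5 * (start + end)
        ax.text(0.7 * a * np.cos(mid), 0.7 * b * np.sin(mid), name,
                ha="center", va="center", fontsize=8)
        start = end

fig, ax = plt.subplots()
draw_half(ax, top, 0.0, np.pi)           # upper half: desirable flavors
draw_half(ax, bottom, np.pi, 2 * np.pi)  # lower half: roast/defect flavors
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```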

Ranking Specific Flavors

There is yet another way of visualizing these data that would be interesting; this one is an idea from Scott Rao. I selected a few taste descriptors that are often sought after by coffee drinkers, and ranked the different varietals and processes by how often they come up in their respective categories. This time I didn’t normalize the fractions by how often they come up in all coffees, because that doesn’t affect the order of the rankings. I did however add error bars (for the math geeks, Poisson errors) to represent the small-number statistics; in other words, when a given varietal/process is represented by fewer bags, the true fraction of how often it’s described with a given word is more uncertain, because we don’t have enough data to constrain it well.
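Here’s a small sketch of how such a fraction and its Poisson error bar can be computed; the counts in the example are hypothetical:

```python
from math import sqrt

def descriptor_fraction(k, n):
    """Fraction of a varietal's bags carrying a descriptor, with a
    Poisson error bar: the error on a count k is sqrt(k), which
    propagates to sqrt(k)/n on the fraction."""
    return k / n, sqrt(k) / n

# e.g. 12 out of 40 hypothetical SL28 bags described as "blackberry":
f, err = descriptor_fraction(12, 40)
print(f"{f:.2f} +/- {err:.2f}")  # 0.30 +/- 0.09
```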

Something interesting Matt Perger noticed in the “Sweet” figure is that smaller-bean varietals tend to be on the sweeter side, which could be explained by small beans roasting more evenly between their surface and their core.

I hope you found this analysis as interesting as I did ! I’d like to thank Scott Rao, Matt Perger and Patrick Liu for useful thoughts and comments, as well as the developers of the Firstbloom app again !

Now, here are more figures I generated ! I made many more ranking figures, available to my Patreon supporters.

Varietal Word Clouds

Processing Word Clouds

If you loved the figures in this post, there are many more like them available to my Bourbon-tier Patreon supporters here, for 30 coffee varietals and 14 processing methods !

Why do Percolation and Immersion Coffee Taste so Different ?

The picture above, sent by my friend Francisco Quijano, is an awesome demonstration of how different V60 (left) and Aeropress (right) brews of the same coffee can look.

I’d like to talk about why coffee brewed by immersion (e.g. french press, Aeropress, siphon) tastes, and even looks, so different from coffee prepared by percolation (e.g. pour over or drip). Some of you may have noticed that this holds even when you compare them at similar extraction yields and concentrations.

In a previous post, I talked about a more general equation for extraction yield that should provide a better correlation with the chemical profile of a coffee cup, and therefore with its taste profile. Obviously, it doesn’t capture effects like changing coffee, roast curve, or even grind size. But there’s something else fundamental that the general equation cannot capture, because the taste profiles generated by immersion and percolation brews just live in different landscapes. Today I want to explore why that is.

The crucial difference between a percolation and an immersion is simple: a percolation extracts coffee with clean water, whereas an immersion extracts coffee with water that gradually becomes more and more concentrated, because the water sits with the coffee grounds for the whole brew.

Because of this, the speed of extraction levels off more quickly in an immersion brew. This arises from the physics of diffusion; a solvent that is already concentrated in a specific chemical compound will have a much harder time extracting more of that compound from the coffee grounds. This concept is described by the Noyes-Whitney equation, which in its standard form reads:
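$$\frac{dm}{dt} = \frac{D\,A}{d}\,\left(C_s - C\right)$$

where $m$ is the dissolved mass of a given compound, $D$ is its diffusion coefficient, $A$ is the surface area of the coffee particles, $d$ is the thickness of the diffusion boundary layer, $C_s$ is the compound’s saturation concentration at the particle surface, and $C$ is its concentration in the surrounding water.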

You can read more about the different terms of this equation here, but basically this just tells you that the rate at which a compound gets extracted is higher when the solution is much less concentrated in it than the coffee particle.

So far, it would seem like this only explains why an immersion extracts slower, not why it extracts a different profile of chemical compounds. But there’s a catch: even if water is concentrated in one specific compound, that doesn’t prevent it from extracting other compounds efficiently. Therefore, if you wait long enough, an immersion brew will very closely reflect the chemical composition that was initially in the coffee bean, as each individual chemical compound comes into balance with the slurry. If you stop the brew before everything is extracted (which we usually do), the slowest-extracting compounds will be a little under-represented, but otherwise the chemical composition of your cup will be a pretty good reflection of the chemical composition of the coffee bean.

In a percolation brew, things happen very differently, because at every moment the slurry water is replaced with cleaner water, forcing the extraction speed of each compound to remain high as long as that compound is not depleted from the coffee grounds. As you might deduce, this means that the fast-extracting compounds will be over-represented in a percolation brew.

In other words, the chemical profile of a percolation brew will be very strongly correlated with the compounds’ extraction speeds, whereas that of an immersion brew will instead be strongly correlated with how abundant each chemical compound is in the coffee bean. It’s like listening to music with two different equalizers on.

I like explanations with words, but I like figures even more. We can explore the difference between percolation and immersion brews by simulating two different brews with a very simple toy model, based on solving the Noyes-Whitney equation numerically. In the percolation case, the slurry concentration term will always be forced to zero as we constantly replace the slurry with fresh water.

Let’s imagine a coffee bean that contains 30 different chemical compounds, each with its own abundance and extraction speed. I generated 30 such chemical compounds at random, and obtained this distribution:

Each red circle here is one of 30 simulated chemical compounds. Those further to the right are present in larger quantities, and those further up are easier to extract.

Now, let’s solve the Noyes-Whitney equation for one of them. Here’s how the brew concentration goes up over time for the fastest-extracting compound:

This should be nothing surprising: the extraction speed levels off much earlier during the immersion brew, because the slurry water becomes too concentrated. In the percolation brew, the extraction is still happening for as long as the chemical isn’t depleted from the coffee particles.

Now, we want to compare these two beverages at the same extraction yield. To do this, I generated the extraction of all 30 compounds simultaneously, and stopped the brew when the average extraction yield reached 20.0%. I made the assumption that the chemical compounds that can be extracted from the bean account for 28.0% of its mass. Unsurprisingly, the immersion brew took a bit more time to reach that average extraction yield.
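For those who want to play with this, here is a minimal, dimension-less sketch of such a toy model. The random distributions, the water-to-coffee ratio and the time step are arbitrary choices for illustration, not necessarily those behind the figures below:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 30             # number of simulated chemical compounds
max_ey = 0.28      # extractable compounds make up 28.0% of the bean mass
target_ey = 0.20   # stop each brew at a 20.0% average extraction yield

# Random abundances (summing to max_ey) and extraction speeds.
abundance = rng.lognormal(0.0, 1.0, n)
abundance *= max_ey / abundance.sum()
speed = rng.lognormal(0.0, 1.0, n)

def brew(percolation, water_ratio=17.0, dt=1e-3, max_steps=10_000_000):
    remaining = abundance.copy()   # mass still inside the particles
    extracted = np.zeros(n)        # mass already in the beverage
    for _ in range(max_steps):
        # Noyes-Whitney: each compound's extraction rate scales with the
        # difference between its concentration at the particle surface
        # (taken proportional to what's left inside) and in the slurry.
        slurry = np.zeros(n) if percolation else extracted / water_ratio
        dm = np.clip(speed * (remaining - slurry) * dt, 0.0, remaining)
        remaining -= dm
        extracted += dm
        if extracted.sum() >= target_ey:
            break
    return extracted

immersion_cup = brew(percolation=False)   # slurry concentration builds up
percolation_cup = brew(percolation=True)  # fresh water keeps the slurry clean
```

The only difference between the two cases is that single `slurry` line: percolation forces the slurry concentration term to zero at all times, exactly as described above.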

Something really fun we can do with this simulation is look at the profile of chemicals in the final cup for the immersion versus percolation, and compare it with the chemical abundances in the coffee bean. This is what we get:

Each bar in this figure represents one of the 30 chemical compounds we generated randomly. I placed them in order of extraction speed; those further to the right extract faster.

One thing that immediately jumps out is how the immersion brew (red) is much more similar to the internal coffee composition (black) than the percolation brew (blue). The only difference lies in the compounds that are slowest to extract, as expected. If we let the immersion brew continue, this difference would become smaller and smaller, and eventually vanish completely.

The percolation brew looks quite dramatically different from the internal composition of the coffee bean ! As you can see, those compounds that extract fast become completely over-represented compared to the internal coffee composition.

As it’s already quite clear from the figure above, an immersion brew composition correlates mostly with the abundance of chemicals inside the coffee bean:

This correlation is quite strong as you can see, with the exception of the 6 slowest compounds, which are still out of balance because we stopped the brew before the maximum average extraction yield of 28.0%.

Here’s another interesting observation: a percolation brew correlates strongly with how fast each chemical compound can extract:

As you can see above, the compounds further to the right are much more represented in the cup of percolation coffee, whereas they are not necessarily over-represented in the cup of immersion coffee.

All of these considerations only hold true because we stop the brew before the maximum theoretical extraction ceiling. If we were to extract everything from the coffee beans, then obviously the immersion and percolation brews would end up with the exact same chemical profiles; the brew times would just be different, and the concentrations would also be different if you used different quantities of brew water. To demonstrate this, I let the simulation run all the way to 28.0% and made a video of how the flavor profiles extract. You can see that, although the beverages converge to different concentrations, they both end up with the same profile as the coffee bean composition:

Obviously, there are other complicating factors that can make different types of brew even more different. For example, the presence of channeling in a percolation brew can bring out a lot more of the slow-extracting (usually astringent) compounds from a small fraction of the coffee particles, which as far as I know never happens in an immersion brew. But if you make sure to minimize channels in your coffee (I give some tricks on how to do that in my V60 recipe and pour over video), this won’t be a significant effect.

Another potential difference is suspended solids. Those are almost always filtered out by the bed of coffee itself in a percolation brew, whereas they will remain in your cup in simpler immersion brew methods like the french press. These compounds can have a strong effect on taste (usually muting it), even if they are not dissolved in water.

I’m sure some of you were surprised when I listed the siphon and Aeropress as examples of immersion brews at the start of this post. I know they are mixed methods, but in practice their tastes bear more resemblance to an immersion. I suspect that the reason is simply that most of the extraction happens during the initial immersion phase, not during the subsequent percolation phase. Still, their chemical profile probably looks like some average of the percolation and immersion profiles.

You might think that this whole post is defeating the usefulness of the general extraction yield equation I mentioned before, but I don’t think it is. I will need a mass spectrometer to prove it, but I think that (1) for a fixed coffee and method, it will make the extraction yield measurement depend less on the amount of retained water; and (2) for different brew methods, it will make the comparison a little bit better, even if it’s never perfect. The comparison will certainly be made much better for two different methods in the same category, e.g. a Buchner percolation (without immersion phase) and a V60.

Before closing, I’d like to add a caveat to the analysis above: what I carried out here is a simulation of random chemical compounds that don’t necessarily exist, just to demonstrate the concept of how extraction happens differently in immersion versus percolation. It is a dimension-less analysis (i.e. it does not involve any physical units), and therefore it does not indicate how significant these differences between percolation and immersion are. I do not know whether they cause 1% or 50% of the taste difference between percolation and immersion (the rest of the difference would come from colloids, fines, etc.), but my guess is that it is much less than 50%. One way to test this would be to perform blind comparisons of Hario Switch (immersion) and V60 brews, but keeping all other variables constant will be a real challenge. Just think of how the slurry temperature evolves during the brew in both cases: depending on the kettle temperature, constantly replacing the brew water versus keeping the same water in an immersion will have a significant effect on the temperature profile, unless extreme caution and precise instruments are used !

I’d like to thank Francisco Quijano for sending me his awesome photo that serves as this blog post’s header, and Matt Perger for useful comments. I’d like to thank Aurelien He for proofreading comments.

The Repeatability of Manual V60 Pour Overs

Today I decided to measure how repeatable and consistent my manual V60 pour overs are. My expectations were very low, given how variable my average extraction yields often are when I brew the same coffee a few days apart.

To do this, I used some older coffee I had left from a local roaster to prepare five V60 pour overs in a row. I started by preparing a gallon of water with the Rao/Perger water recipe described here, so that I wouldn’t need to switch gallons mid-experiment, thereby mitigating any possible manipulation error in preparing my brew water. The coffee beans I used are the Quintero Ignacio Colombian (a mix of Caturra, Typica and Tabi varietals) from Saint-Henri coffee roasters, roasted on February 25 2019, which I kept vacuum-sealed in the freezer between then and the date of the experiment, May 26 2019. I took the beans out of the freezer about a week before the experiment, and opened the vacuum-sealed bag right before brewing. Its roast profile is on the slightly dark side, with some hints of smoky flavors.

I used grind setting 7.0 at a 700 RPM motor speed on my Weber Workshops EG-1 grinder. It is zeroed so that burrs touch completely at 0.0, so 7.0 means that the burrs are spaced 350 microns apart. I used the plastic Hario V60 with the tabless Hario V60 bleached filters. I used the brew recipe that I described in this post and that follows Scott Rao’s method except for a few modifications. You can also find a video of this method here (pardon my poor filming skills, I will eventually make a better video).

I used a 22 gram dose and a total water weight as close as possible to 374 grams, to achieve a 1:17 ratio. I prepared a nest shape with chopsticks as I described here. I aimed for 77 grams of bloom water; this is a bit higher than the 3:1 bloom ratio recommended by Scott Rao (66 grams for a 22 gram dose), but I typically find it easier to quickly wet all grounds with that much water. I “rao-spun” the bloom quite heavily after pouring in ~77 grams of water, to ensure that all grounds were wet, and I used a chopstick to pop any bubbles that were forming. I did not use a spoon to stir the bloom. I used a 45 second bloom in all cases.

I pre-heated the kettle to 187°F while I was grinding the dose, pre-wetting the filter thoroughly (first with tap water and then with brew water), and preparing the coffee bed. I then boiled the water to 212°F right when I needed it, to avoid having minerals precipitate during a long boil (I’m not sure yet how important this effect is). I did not click my grinder, which causes it to retain 0.5 grams of coffee instead of < 0.1 grams, but this also causes much less chaff and fines to be present in the dose, because they preferentially stick to the grinder chute. I also used the Weber dose preparation shaker, which helps distribute fines uniformly throughout the coffee bed.

I tried to be as consistent as possible during my five brews – I think the hardest part is keeping a constant flow rate (the newer Acaia Model S scale may help with that because it apparently measures live flow rate, but I don’t have it), which resulted in slightly different brew times. I always initiated the second pour at 1:45, which helps determine during which part I poured faster or slower when the times differ. I used the Brewista artisan gooseneck kettle, which helps achieve a consistent flow rate, but it also means I had to press “quick boil” again every time I put the kettle back on its base (it turns out I did not forget to do it during the five brews).

All brews had a very flat coffee bed at the end, and all were level except for the fourth brew, which was very slightly slanted with the higher side away from me (i.e. the water drew down at the point furthest from me less than half a second before the closest point). When the surface of the water passed that of the coffee bed and I could see light reflecting on the surface of the wet coffee bed, I noted the brew time, waited about 3 seconds and placed the V60 on top of a small container with the same aperture as the plastic V60’s inner ring. I gently swung the V60 up and down to collect 5-10 drops of coffee to determine the approximate concentration of interstitial liquid in the slurry at the end of the brew. This is useful to determine a more accurate average extraction yield that is more independent of the amount of retained water; for a detailed discussion on this, you can see this blog post and this one too.

I cleaned the VST refractometer lens with alcohol and re-zeroed it with distilled water, then measured the concentration of the last few drops and of the beverage using the recommendations of Scott Rao (also see this awesome guide by Mitch Hale). During this experiment, I realized that even if your refractometer recently measured a 0.00% concentration for distilled water, it is still very important to re-zero it; my TDS readings would otherwise have been 0.10% too low, because the weather is getting warmer in Montreal and I had not re-zeroed in more than a month ! You can find more details about this on my Instagram page.

Here’s how the five brews ended up comparing to each other:

Weight of bloom water:

  • Brew 1: 77 grams
  • Brew 2: 75 grams
  • Brew 3: 76 grams
  • Brew 4: 77 grams
  • Brew 5: 77 grams

Full span: 2 grams
Standard deviation: 0.9 ± 0.2 grams

Time where I reached 200 grams:

  • Brew 1: 1:07
  • Brew 2: 1:11
  • Brew 3: 1:11
  • Brew 4: 1:10
  • Brew 5: 1:09

Full span: 4 seconds
Standard deviation: 1.6 ± 0.3 seconds

Time where I reached total water weight:

  • Brew 1: 2:25
  • Brew 2: 2:23
  • Brew 3: 2:23
  • Brew 4: 2:19
  • Brew 5: 2:13

Full span: 12 seconds
Standard deviation: 5 ± 1 seconds

Total time at drawdown:

  • Brew 1: 3:04
  • Brew 2: 3:09
  • Brew 3: 3:05
  • Brew 4: 3:10
  • Brew 5: 3:08

Full span: 6 seconds
Standard deviation: 2.6 ± 0.4 seconds

Beverage weight:

  • Brew 1: 322.3 grams
  • Brew 2: 325.6 grams
  • Brew 3: 325.5 grams
  • Brew 4: 325.9 grams
  • Brew 5: 325.5 grams

Full span: 3.6 grams
Standard deviation: 1.5 ± 0.4 grams

Concentration of the last few drops:

  • Brew 1: 0.59%
  • Brew 2: 0.58%
  • Brew 3: 0.56%
  • Brew 4: 0.51%
  • Brew 5: 0.47%

Full span: 0.12%
Standard deviation: 0.051 ± 0.009%

Concentration of the beverage:

  • Brew 1: 1.42%
  • Brew 2: 1.41%
  • Brew 3: 1.42%
  • Brew 4: 1.43%
  • Brew 5: 1.43%

Full span: 0.02%
Standard deviation: 0.008 ± 0.002 %

Liquid retained ratio:

  • Brew 1: 2.6
  • Brew 2: 2.5
  • Brew 3: 2.5
  • Brew 4: 2.4
  • Brew 5: 2.5

Full span: 0.2
Standard deviation: 0.07 ± 0.02

Approximate “shareable” average extraction yield:

  • Brew 1: 20.8%
  • Brew 2: 20.9%
  • Brew 3: 21.0%
  • Brew 4: 21.2%
  • Brew 5: 21.2%

Full span: 0.4%
Standard deviation: 0.18 ± 0.03%
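As an aside, this “shareable” average extraction yield is simply the mass of dissolved solids in the beverage divided by the coffee dose; here’s a minimal check against the Brew 1 numbers above (the function name is just for illustration):

```python
def shareable_ey(beverage_g, tds_percent, dose_g=22.0):
    """Approximate average extraction yield: dissolved solids in the
    beverage (beverage weight x TDS) divided by the coffee dose."""
    return beverage_g * tds_percent / dose_g  # result in percent

# Brew 1: 322.3 grams of beverage at 1.42% TDS with a 22 gram dose
print(f"{shareable_ey(322.3, 1.42):.1f}%")  # prints 20.8%
```

The more precise figures below also account for the dissolved solids left behind in the retained slurry liquid, using the concentration of the last few drops; see the blog posts linked above for that derivation.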

Precise average extraction yield (assuming f_abs = 1):

  • Brew 1: 21.6%
  • Brew 2: 21.6%
  • Brew 3: 21.7%
  • Brew 4: 21.8%
  • Brew 5: 21.7%

Full span: 0.2%
Standard deviation: 0.08 ± 0.02%
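For the math geeks again, here’s one way to compute the spans and standard deviations above, using the usual small-sample approximation for the uncertainty on a standard deviation (the exact estimator behind the quoted errors can differ slightly):

```python
import numpy as np

def span_and_std(values):
    """Full span, sample standard deviation, and an approximate
    uncertainty on that standard deviation for small samples."""
    x = np.asarray(values, dtype=float)
    span = x.max() - x.min()
    std = x.std(ddof=1)                        # sample standard deviation
    std_err = std / np.sqrt(2 * (len(x) - 1))  # Gaussian approximation
    return span, std, std_err

# Beverage concentrations of the five brews:
print(span_and_std([1.42, 1.41, 1.42, 1.43, 1.43]))
# span = 0.02, std ~ 0.008, std uncertainty ~ 0.003
```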

As you can see, my timings varied by some amount, but the effects on the concentration of total dissolved solids and on the average extraction yields were quite small. Another really interesting observation is that the approximate “shareable” average extraction yields varied more than those calculated with the more exact formula. This may be explained by the fact that the more exact formula better compensates for different liquid retained ratios, which were likely caused by my waiting more or less time before removing the V60 from the coffee pot.

I honestly did not expect to reach a consistency of < 0.02%, close to the inherent precision of the VST refractometer (0.01%), but it seems that with enough focus it is possible ! I do not think that I can reach this kind of accuracy first thing in the morning, when I usually prepare my coffee. This experiment did teach me something important, however: it is of utmost importance to carefully clean the VST lens with alcohol, to properly re-zero it with distilled water, and to be patient while the sample reaches the lens and room temperature. Neglecting any of these steps can cause measurement errors much larger than 0.02% !