[Edit October 14, 2019: After having used the method described below for about a month, I decided to add a few steps to ensure that no liquid gets caught inside the pipette’s rubber cap. If you want to jump straight to the modifications, just use ctrl+F and search for October 14.]
Lately I have been a bit unsatisfied with how repeatable my measurements of total dissolved solids (TDS) concentration were with the VST refractometer. The instrument itself has a quoted precision of 0.01% TDS, but I found that the only way I could achieve such repeatability was to brew a few coffees in a row, let them cool for about 40 minutes, then sample them carefully and measure them many times, as I did in one of my latest posts to assess my manual repeatability in brewing V60s. Lacking a good methodology to reach 0.01% TDS repeatability on every morning brew has held me back from properly characterizing the effects of V60 filters, grind temperature and a few other things.
There are two problems that will hold you back from measuring TDS with a 0.01% precision: evaporation and a sample not at room temperature. Today I want to present you with a method that I recently designed through trial and (lots of) error, which allowed me to reach that 0.01% TDS precision I was hoping for. I will just start with the answer first, and describe what method worked for me, but I’ll include more data to back up why it works, and some other things I tried for the more curious readers.
This method is intended for filter coffee, therefore it does not include the use of syringe filters. For that I would recommend using Mitch Hale’s guide on refractometer usage, but it would probably be wise to add a step where you cool down your sample with a larger, thermally conductive pipette (I’ll talk more about the details below). A larger pipette would be needed, because the syringe filter and plastic syringe both need to be “rinsed” with some coffee first to get a good measurement.
The equipment you will need is the following:
A set of glass pipettes. You will need only one pipette but two of the red rubber caps that come with them. Metal pipettes would also work but they are very hard to find.
Two small elastics
Access to a tap water faucet and a dry towel near your brew station.
Small microfiber cloths for refractometer cleanup (optional). You can also use tissues.
Preparing the Equipment
Place the two elastics around the pipette as shown below; you can leave them there between uses:
Note that the elastic close to the outer end of the pipette is not exactly at the tip, so that you can avoid dipping the elastic in your coffee. Both elastics serve to prevent water from sticking to the bottom surface of the pipette during rinsing and travelling all the way to either end of the pipette. You may also notice that I mounted the rubber cap on the “wrong” end of the pipette. I actually prefer it this way because the larger opening on this end creates less froth when putting a sample of coffee on the refractometer lens.
The Detailed Steps
(1) Put distilled water on the lens
Use a generous amount of distilled water to make sure the lens is at room temperature.
I recommend using an eye dropper bottle to keep a small amount of distilled water handy.
Ideally wait a minute or more so that the temperature of the lens is at equilibrium with the distilled water; I do this step before I start brewing to save time.
Close the lid to avoid contamination.
You can blow gently on the surface of the distilled water if you don’t have time to wait a few minutes.
(2) Sample your coffee with the glass pipette.
Don’t plunge the elastic in.
Stir thoroughly with a spoon as you’re sampling (I don’t think swirling creates enough vertical mixing).
Season the pipette by sampling, dropping back in the coffee pot (or elsewhere) then sampling again.
[Edit October 14, 2019: At this step I highly recommend wiping the tip of the pipette dry with a clean, dry cloth. I also recommend verifying that the inside of the loose rubber cap is dry by pressing it against your wrist or finger while blocking most of the spout. If there’s water or coffee in it, you should feel or hear it hissing, and in this case you should rinse it and place a dry tissue inside it for a few hours. I keep a couple of spare rubber caps around in case that happens.]
(3) Rinse the pipette for 30 seconds.
First put a rubber cap on the pipette spout to avoid diluting your sample.
Open a gentle stream of cool tap water. Use the photo further below as a reference.
Hold the pipette by the two rubber caps, with the spout end a little bit higher.
Place the pipette close to the top of the water stream to avoid any water getting on either end.
Gently move left and right and rotate the pipette.
Do not rinse for longer than 30 seconds or you might make your sample too cold.
(4) Dry the pipette thoroughly with a towel.
(5) Leave the pipette to rest for at least a minute, ideally for 2-5 minutes.
Waiting allows the sample to reach room temperature even if the tap water made it a bit too cool.
I usually take this time to taste the coffee and note down my impressions without being influenced by the measured TDS, and then to zero the refractometer.
(6) Zero the refractometer.
You can optionally measure the distilled water temperature with a bead K-type thermoprobe right *after* zeroing if you want to test how well you are applying the method.
Do not rezero after putting the probe in because you might have contaminated the distilled water. Dry the probe thoroughly.
I do not know whether the VST refractometer keeps its zero when it automatically shuts down.
(7) Dry the refractometer lens.
You can very gently blow on the cleaned surface to favor evaporation.
You can look at an angle to see light reflecting on the lens; this makes it easier to see if any water droplets remain (see the photo below).
Use a clean and ideally absorbent cloth. A tissue can work. Be careful not to scratch the lens if you’re using an Atago; the VST has a sapphire lens, so you won’t scratch it with conventional materials.
I use cloths and keep one for water or alcohol only, then I dry it between uses. I use a different one (or a tissue if I’m out of them) for wiping out coffee.
(8) Place 3-5 drops of cooled coffee on the lens and immediately close the lid.
I recommend discarding 1-2 drops before putting a sample on the lens. This will help ensure that your sample is not contaminated with tap water.
I recommend placing the drops around the lens on the metal ring, to stabilize temperature as much as possible before the sample touches the lens.
Scott Rao recommends 3 drops only to avoid shifting the lens temperature too much, but here we already cooled down the sample so it is not as crucial. I still recommend using as few drops as you can as long as you obtain a small pooling of the sample on the lens, but in my experience up to 5 drops can be needed because cool liquid is more viscous than warm liquid.
Evaporation is not an issue because the sample is cool; it happens extremely slowly in the absence of airflow.
Do not blow air on the coffee sample.
(9) Wait at least 30 seconds and not more than 5 minutes.
Don’t wait any longer because sedimentation or evaporation *could* become issues.
(10) Measure your TDS.
Take several measurements. It is possible that your TDS will slowly go up if your sample is too warm, or down if it’s too cold. Ideally it shouldn’t shift by more than 0.03% TDS with this method.
If your readings shift by more than ~0.03% TDS there is a risk that the lens temperature converged to a higher temperature than what you zeroed it at (causing a lower TDS reading), and if your sample was too warm there is a risk of evaporation (causing a higher TDS reading).
If you want to verify that your sample temperature is the same as your distilled water when you zeroed, do this right after measuring a converged TDS with a bead K-type thermoprobe. Clean it with a drop of alcohol after doing so.
(11) Clean up the equipment.
Wipe out the coffee sample; I use a clean cloth for this and put it in the laundry after 1 use, otherwise it can accumulate coffee oils. Tissues can work for this.
Add a few drops of alcohol then wipe again.
Clean up the pipettes by sampling hot water and leaving them to dry; you can also occasionally sample alcohol to do a deeper clean.
Keep the refractometer lid closed to avoid having anything contaminate the lens.
If you put hot coffee on the lens, I recommend cleaning it up, then putting as much distilled water as in the photo further above for a minute, wiping it, and repeating this three times. This will allow the lens to get back to room temperature. Once you have done this, you can start over (and don’t skip the re-zero step).
[Edit October 14, 2019: I recommend leaving the loose rubber cap facing downward so that any liquid inside it can slowly drip out over time. I usually place it leaning against the wall to do this.]
The Effect of Temperature on TDS Measurements
At this point you may be asking: “Ok, but how important is it really to have the right sample temperature ?” The answer is: REALLY important ! To illustrate this, I purposely put a warm sample on my refractometer lens and measured its TDS several times as it cooled down.
The room temperature was 75.7°F, and that was also the temperature of the distilled water I used to zero the refractometer. The correct TDS for this beverage was 1.45%; as soon as the sample temperature departed by 0.2°F or more, you can see differences of 0.01% TDS or more ! A difference of just 5°F can cause you to underestimate your concentration by as much as 0.08% TDS ! The takeaway for me is that I should always try to measure my sample within 0.2°F of the temperature at which I zeroed the refractometer with distilled water, otherwise my accuracy will be worse than 0.01% TDS.
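To put rough numbers on this, here is a simple linear sketch of the error you can expect from a temperature mismatch. The slope is my own inference from the single 5°F → 0.08% TDS data point above; the real response is likely nonlinear, so treat this as an order-of-magnitude illustration only.

```python
# Rough linear model of the TDS error caused by a sample warmer than the
# distilled water used to zero the refractometer. The slope is inferred
# from the single data point in the text (5°F -> ~0.08% TDS); the true
# response is likely nonlinear, so this is a sketch, not a calibration.

SLOPE_PCT_PER_F = 0.08 / 5.0  # %TDS underestimated per °F of excess warmth

def tds_error(temp_offset_f):
    """Approximate TDS reading error (%TDS) when the sample is
    `temp_offset_f` degrees F warmer than the zeroing water."""
    return -SLOPE_PCT_PER_F * temp_offset_f  # warmer sample -> reads lower

for offset in (0.2, 1.0, 5.0):
    print(f"{offset:4.1f}°F too warm -> {tds_error(offset):+.3f} %TDS")
```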
The Effect of Evaporation on TDS Measurements
To investigate the impact of evaporation on our TDS measurements, I tried two different scenarios that resemble what I usually did when I cooled a sample of coffee in a ceramic ramekin. I put a small sample of water at 160°F on a milligram-precision scale and noted how fast the weight went down as water evaporated. In the first case, I placed 10.210g of water in the small plastic container that comes with the scale. In the second case, I placed a smaller 4.211g sample in a tiny stainless steel cup with a similar opening surface that I found in my plumbing stuff; in the second scenario, the sample cools down much faster because it has less mass and is less insulated, so that will show us how the speed at which the sample temperature drops affects its rate of evaporation. Here’s what I found:
It seems like the different cooling rates drastically affected the evaporation rate. The smaller sample placed in the metal cup, which cooled much faster, suffered much less evaporation. On top of that, the evaporation rates of both samples kind of leveled off as they were cooling.
But the more interesting question is how that would affect our TDS readings. If we simulate a 5-gram sample that we set aside to cool down, here’s how TDS would creep up with time at these two evaporation rates:
We can see that in the slower cool down case, TDS went up by a bit more than 0.01%, which is not great. Therefore, we want a strategy that allows us to quickly cool down a sample, ideally in an enclosed container with very little surface exposed to the air. This is why I think a pipette is a great place to cool down our sample !
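The arithmetic behind such a simulation is straightforward: the dissolved solids stay constant while water mass leaves the sample. A minimal sketch, with illustrative numbers rather than my measured evaporation rates:

```python
# Evaporation inflates a TDS reading because the dissolved solids stay
# constant while water mass leaves the sample. Numbers are illustrative.

def tds_after_evaporation(tds0_pct, sample_g, evaporated_g):
    """TDS (%) after `evaporated_g` grams of water evaporate from a sample."""
    solids_g = tds0_pct / 100.0 * sample_g   # dissolved solids (unchanged)
    return 100.0 * solids_g / (sample_g - evaporated_g)

# A 5 g sample at 1.45% TDS that loses 50 mg of water:
print(round(tds_after_evaporation(1.45, 5.0, 0.05), 3))  # -> 1.465
```

Even a 1% loss of water mass is enough to shift the reading by more than the instrument's 0.01% precision.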
Some More Data to Back up this Method
To verify how efficient this method is, I brought some tap water to 160°F, typical of my warmest coffee temperatures immediately after brewing (I use a Hario glass server, although mine is a smaller one with a 400 mL capacity), sampled it with a glass pipette while imitating the seasoning step, and stuck my Bluetooth K-type bead thermoprobe in the pipette, then rinsed it with tap water. Here’s what I obtained:
I highlighted the moments where tap water was touching the pipette; as you can see I got distracted and missed it for about two seconds in the middle. Removing that small gap, I needed exactly 29 seconds to reach room temperature.
What’s also neat about this is that the result is not too sensitive on starting temperature, because the sample cools faster the larger the temperature difference is (that’s basic thermodynamics).
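This behavior is just Newton's law of cooling: the sample approaches room temperature exponentially, so the time needed grows only logarithmically with the starting temperature. A small sketch with an assumed, made-up time constant shows why the starting point barely matters:

```python
import math

# Newton's law of cooling: the sample decays exponentially toward room
# temperature, so the time to get close to room temperature grows only
# logarithmically with the starting temperature. The time constant tau
# below is a made-up illustrative value, not a measurement.

def time_to_within(delta_f, t_start_f, t_room_f=75.0, tau_s=8.0):
    """Seconds of cooling until the sample is within `delta_f` of room."""
    return tau_s * math.log((t_start_f - t_room_f) / delta_f)

for t0 in (160.0, 180.0):
    print(f"start {t0:.0f}°F -> within 1°F of room in "
          f"{time_to_within(1.0, t0):.1f} s")
```

With these assumed values, starting 20°F hotter adds under two seconds to the cool-down.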
Let’s compare that with my previous method, where I let my sample cool in a ceramic ramekin:
Not only does the ramekin cause evaporation, but the sample didn’t even get close to room temperature after 8 minutes !
For a few days now I have been measuring the temperature of my distilled water sample immediately after I zeroed, and that of the coffee sample immediately after I measure TDS, to ensure that the method above allowed me to reach similar temperatures. Here are the results I obtained:
Brew 1: zeroed at 72.5°F, measured at 72.6°F.
Brew 2: zeroed at 76.3°F, measured at 76.6°F.
Brew 3: zeroed at 75.6°F, measured at 75.5°F.
Brew 4: zeroed at 74.4°F, measured at 74.2°F.
Brew 5: zeroed at 75.1°F, measured at 75.1°F.
Brew 6: zeroed at 75.5°F, measured at 75.5°F.
Brew 7: zeroed at 74.9°F, measured at 75.2°F.
Brew 8: zeroed at 74.3°F, measured at 74.2°F.
As you can see, the method worked quite well ! Despite my morning zombiness, I had an average difference of 0.14°F and a maximum difference of 0.3°F, which is just enough to affect TDS by 0.01% !
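If you want to double-check those statistics, here is the quick computation over the eight zero/measure pairs listed above:

```python
# Sanity check of the statistics quoted above, from the eight
# (zeroed, measured) temperature pairs.
pairs = [(72.5, 72.6), (76.3, 76.6), (75.6, 75.5), (74.4, 74.2),
         (75.1, 75.1), (75.5, 75.5), (74.9, 75.2), (74.3, 74.2)]
diffs = [abs(measured - zeroed) for zeroed, measured in pairs]
print(f"average difference: {sum(diffs) / len(diffs):.2f}°F")  # 0.14°F
print(f"maximum difference: {max(diffs):.1f}°F")               # 0.3°F
```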
Failed Attempts and Other Methods
As I mentioned above, there are a few more things I tried that kind of failed, or weren’t really useful.
I initially wanted to use a metal pipette rather than one made of glass because metal is more thermally conductive. I totally failed to find any metal pipettes that are not crazily expensive, so I changed my focus and looked into metal syringes and turkey basters instead. So, I went ahead and ordered the following horror on Amazon:
It did kind of work, but made a huge mess especially because the piston is gigantic; you could probably syringe up half a V60 brew with it. Having such little liquid in a potentially large encasing worried me that evaporation could be an issue, but the even more annoying part is that the piston doesn’t hold coffee very well if you release it, so you get a lot of spillage.
But then I realized that the syringe needle was about exactly how I wish a metal pipette would be, and that the rubber caps that came with my glass pipettes actually fit perfectly on it ! So there you have it, a perfectly fine metal pipette:
After playing a bit with it, I think it did a perfectly fine job, but it actually doesn’t make it that much faster to cool down the sample, and you lose something that I realized I like a lot: seeing the coffee sample inside the pipette. The glass pipette made it much easier to make sure I hit the sample with tap water, that no tap water entered the pipette, and that no condensation was forming inside the pipette.
In the figure above, you can see that the metal syringe also took about 30 seconds of tap water to reach room temperature. This is not an improvement over the glass pipette, so I decided to stick with glass. This may indicate that the flow of water, not the thermal conductivity of the pipette, is what determines the efficiency of this cooling method in this range of materials. Note that using a plastic pipette would shift that balance and make the method much slower, because plastic is an excellent heat insulator.
The Patience Option
I also decided to measure how long it would take to just let each pipette reach room temperature without taking any action, for the most patient among us. The results surprised me at first:
The glass pipette was faster ! I think this is due to the glass having more thermal mass, so it initially takes up a lot of heat from the sample very fast before the full pipette+coffee system needs to cool through very slow air conduction. For the metal pipette, the whole thing must happen via air conduction because the thin metal has a very small thermal mass.
If you’d like to do this with samples of about half a mL, the glass pipette took 12 minutes 16 seconds to be only 1°F warmer than room temperature, whereas the metal one took 17 minutes 38 seconds. I’m giving you the time for an acceptable 1°F difference because actually reaching room temperature takes a very long time (this is an asymptotic process).
The Aluminum Monster
There are a few more things that I tried which involved the freezer. I wrapped the glass pipette in aluminum foil to create an aluminum sleeve and wrapped the outer parts around a few scotch rocks to add thermal mass, and put that device (without the pipette) in the freezer.
You really don’t want to take a pipette out of the freezer because water from the surrounding air will condense everywhere on it, including inside it, which would contaminate your sample.
I tried taking out the aluminum “monster” from the freezer and inserting the glass pipette in it, and here’s what I got:
As you can see, there’s a huge risk of overshooting that is involved, in addition to the method being less practical because you need to build an aluminum monstrosity and let it cool in the freezer between every cup of coffee. I tried varying the amount of scotch rocks (down to zero), but it was still prone to overshooting, and generally much slower than tap water at doing its job.
The Faucet Cooling System
The next thing I tried was the over-zealous, “hide it from your friends” cooling system: I wrapped aluminum foil around a glass pipette connected at both ends to 1/4″ rubber tubing, then wrapped the same piece of foil around my usual glass pipette as well. This created a sleeve where I could put my glass pipette in thermal contact with another glass pipette that is part of the rubber tubing:
I then cut the corner of a small 4×6″ vac sealable bag, wrapped a rubberband around the tubing and fixed it inside the vac sealed bag, to create a flexible inlet for the tubing. I wrapped the vac sealing bag around the faucet and held it tight in place with one hand and turned on the tap water to get water running through the system, then out in the sink.
This actually didn’t even spill or explode, which still surprises me. I had to open the faucet gently, but the problem with this system (besides you looking insane) is that the thermal contact between the two pipettes is not great, so it takes a lot more time to cool down the sample, about 10 minutes !
I hope you enjoyed reading this, I certainly had fun messing up my kitchen !
Lately I received the kettle Brewcoat that I ordered a few weeks back; I previously didn’t dare order one because they don’t make any for my Brewista artisan gooseneck kettle and they’re not cheap. Thanks to your support, I decided it was worth trying it and it would provide us with a “worst case scenario” of how a loosely fitting brewcoat improves kettle temperature stability. I went with the “Black felt/Black Polar Composite” version; I picked the Bonavita 1.0L kettle model because the size is very similar to the Brewista, and I was delighted that it fits quite nicely by adding just two pins:
The back of the kettle is the part where the fit is worst, because the Bonavita has its handle connected to the bottom of the kettle where the Brewista doesn’t:
I know, I scratched my kettle a bit 😢
If you are a Patreon backer you might also know that a while back I had ordered a small sheet of aerogel, which is one of the most insulating materials that are known. I decided to use it to add an insert to the Brewcoat for even more crazy insulation:
If you look carefully on the image above, you can see the additional layer of aerogel under the Brewcoat.
To investigate how each layer affected the kettle stability, I used cool Montreal tap water to rinse the kettle thoroughly and bring it down to the tap water temperature, then placed it on its powered-off base with the aerogel + Brewcoat layers and exactly 600.0g of cool tap water (this is the quoted capacity of the Brewista). The ambient temperature was 22°C (72°F) during the whole experiment. I made sure that the kettle lid was closed correctly and inserted my bead temperature probe in the vent holes of the lid all the way in to make sure the probe touched the bottom of the water.
I logged the temperature curve with the Thermoprobe BlueTherm One Bluetooth device (which is NIST calibrated to a precision of 0.7°F; a purchase made possible by the support of my Patreon backers !), turned on the kettle base and immediately pressed the “Quick Boil” button. When the temperature hit 212°F on the probe, I turned off the kettle base completely and waited for the probe to cool down to 192°F or lower. Once that was achieved, I exported the temperature curve as CSV to build the figures and comparisons below.
I then repeated the exact same experiment with the Brewcoat only, and then using the kettle without any insulation. Between each experiment I thoroughly rinsed the kettle, temperature probe and kettle lid with cool tap water to bring its temperature down, then threw away the water and filled it again with cool tap water. Here are the resulting temperature curves, after I stitched their time axis to remove small delays from my manual inconsistencies:
You can immediately see that the “no insulation” case is much worse than the others ! In particular, it is very hard to keep the non-insulated kettle above 200°F, which is very consistent with my experience, as I constantly need to press the “quick boil” button between every pour during my V60 brews.
Adding the Brewcoat layer immediately makes things a lot better; there is no obvious gain in the time required to reach boiling temperature, but once this point is reached the cool down rate is massively reduced ! Even if you are using a kettle that doesn’t require you to constantly press “boil” every time you pick it up from the base, I suspect it will still have a hard time remaining close to 212°F unless the kettle is insulated with more than a thin layer of metal, because according to the purple curve above that will require a constant and significant energy input; this is also not very eco-friendly.
Another point that immediately becomes clear with the figure above is that adding a layer of aerogel provided significantly diminished returns, and would only be worthwhile if I planned to leave the kettle off for much more than 15 minutes. Maybe adding this aerogel layer could provide small energy savings over just the brewcoat in a coffee shop environment, but I doubt they would be significant.
I also built a small table to compare different performance metrics of the three cases I experimented with:
In this table, I calculated the median upward and downward slopes in the heat and cool down phases for each case. As we saw before, the heat rate isn’t significantly faster with additional insulation, but the cool down rates are significantly different with either types of insulation.
I also included how much time is needed, after the kettle is turned off at the boiling point, to reach 200°F and 192°F. The non-insulated kettle fell to 200°F in only 34 seconds ! This is not great, to say the least. The next few lines indicate how many degrees are lost during 10-second, 30-second and 1-minute waits after the boiling point is reached if the base doesn’t immediately power the kettle back (e.g., if you forget to press “quick boil” again on the Brewista kettle). This also definitely applies for the duration of your pours, because no kettle can keep receiving energy while it’s off the base ! If you pour for a duration of 30 seconds, a non-insulated kettle will already have lost a whopping 10°F – that really surprised me, and it makes me wonder why V60 kettles don’t already come with an additional layer of insulation !
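For those curious how such slopes can be pulled out of a logged curve, here is a sketch of the kind of computation involved. The data below is synthetic (an idealized, perfectly linear 20°F/min cool down), not my actual CSV:

```python
from statistics import median

# Sketch: estimating the median cool-down slope from a logged temperature
# curve, and the degrees lost during a pour of a given length. Synthetic
# data: an idealized linear 20°F/min drop, sampled every 10 seconds.
times_s = list(range(0, 70, 10))
temps_f = [212.0 - t * (20.0 / 60.0) for t in times_s]

slopes = [(temps_f[i + 1] - temps_f[i]) / (times_s[i + 1] - times_s[i])
          for i in range(len(times_s) - 1)]        # °F per second
median_slope = median(slopes)

print(f"median cool down slope: {median_slope * 60:.1f}°F/min")
print(f"lost during a 30 s pour: {-median_slope * 30:.1f}°F")
```

A median is preferable to a plain average here because real logged curves have noisy samples that would skew a mean slope.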
One other thing that I noticed during this experiment is that the Brewista temperature indicator is not always reliable. Without any insulation, the true temperature is almost always 20°F cooler than what the kettle base indicates, and the base indicates 212°F when the true temperature is at barely 196°F. If you wait long enough to hear the water vapor hissing out of the vents, however, then the true temperature is in the range 211-212°F, but that happened only about 15 seconds after the base indicated 212°F for me, so pay attention to the sound, not just the temperature reading of your kettle.
When I used only the Brewcoat, then the kettle base temperature was reliable within a degree, until I got past 200°F, where it got gradually worse; it indicated a temperature about 2°F higher than reality by 202°F, and the difference increased up to 6°F when the base indicated 212°F as the true water temperature was only 206°F. However, I only had to wait a few more seconds for the true water temperature to hit a stable 212°F as the water vapor started hissing through the vents. As far as I could tell, the case with a Brewcoat plus an aerogel layer was very similar.
I hope you found this as interesting as I did; it turns out we should worry about kettle insulation if we want to achieve the highest possible slurry temperatures in our V60s ! I will be gathering some more slurry temperature curves in the upcoming weeks, and I fully expect to see an increase of at least a few degrees, which is great because we are still much under the 205°F threshold where, in my syphon tests, brews started to taste worse.
In this post, I discuss the composition of water that we use to brew coffee. If you are new to these discussions, I strongly recommend that you first read this previous post about brew water.
Barista Hustle recently released a very clever Excel calculator to determine the amount of mineral concentrates needed to craft brew water recipes starting from soft tap water instead of distilled water.
I thought this was a great idea, and decided to make a similar tool for those like me who prefer to use a single concentrate. While I was at it, I made it in a way that allows you to use more minerals on top of epsom salt and baking soda, which allows you to control the concentrations of magnesium, calcium, sodium, sulfate and bicarbonate ions individually, instead of just hardness and total alkalinity.
We are only beginning to understand the effects of magnesium, calcium and sodium ions on how fast different chemical compounds extract; this is discussed a little in the Barista Hustle water course, but so far I have not seen much more information about this elsewhere. Similarly, I have never seen much discussion of the effects of sulfate ions on the resulting coffee taste or composition. This new water crafting tool could allow us to experiment with them while keeping everything else fixed.
The uses of this tool go even further; if you have reverse osmosis water with non-zero mineral composition, you can adapt your concentrate to still get the proper brew water composition. You could even use it to craft custom brew water starting from soft water bottles.
One thing this tool won’t allow you to do is create brew water that is lower than your tap water, reverse osmosis water, or bottled water in total alkalinity, hardness, or individual ionic concentrations. This is because doing so by adding minerals is just not possible (maybe that would be possible with reactive compounds, but let’s not go there).
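For reference, the core arithmetic behind this kind of calculator is simple molar-mass bookkeeping: each gram of mineral dissolved per litre contributes ions in proportion to the ion's share of the compound's molar mass. This is my own illustrative sketch, not the sheet's actual formulas:

```python
# Molar-mass bookkeeping behind a brew water calculator.
# Molar masses (g/mol): MgSO4·7H2O = 246.47 (Mg 24.305, SO4 96.06),
#                       NaHCO3    = 84.007 (Na 22.99, HCO3 61.016).

def ppm_from_epsom(g_per_l):
    """(Mg ppm, SO4 ppm) contributed by epsom salt (MgSO4·7H2O)."""
    return (g_per_l * 24.305 / 246.47 * 1000.0,
            g_per_l * 96.06 / 246.47 * 1000.0)

def ppm_from_baking_soda(g_per_l):
    """(Na ppm, HCO3 ppm) contributed by baking soda (NaHCO3)."""
    return (g_per_l * 22.99 / 84.007 * 1000.0,
            g_per_l * 61.016 / 84.007 * 1000.0)

mg, so4 = ppm_from_epsom(0.1)           # 0.1 g/L epsom salt
na, hco3 = ppm_from_baking_soda(0.05)   # 0.05 g/L baking soda
print(f"Mg {mg:.1f}, SO4 {so4:.1f}, Na {na:.1f}, HCO3 {hco3:.1f} ppm")
```

A starting-water composition would simply be added on top of these contributions, which is also why the concentrations can never be brought below it.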
The tool I built is a Google Sheet; if you are new to Google Sheets, keep in mind that you won’t be able to modify it before you create your own copy, with File/Make a Copy. Asking me for edit permissions won’t work; doing so would modify the sheet for every other user. You can find the tool here.
Make sure to read the header instructions of the sheet. There are a few different versions of the calculator for those who don’t have access to all minerals.
I thought this was also a good moment to release publicly two of my previous Patreon-only videos related to brew water. In the first one, I filmed myself crafting a single batch of Rao/Perger brew water concentrate; you can find more explanations about the required material here:
In this second video, I use the concentrate to prepare a 4L container of brew water, starting with distilled water; you can find some more information about it here:
[Edit Nov 27, 2019: David Seng just let me know that he also built a water crafter page on his website. You should definitely check it out, as it seems very helpful and even includes the Langelier saturation index for scale and corrosion !].
I’m hoping this will help you brew better coffee, hopefully with a little less hassle for those of you lucky enough to live in a soft water area !
A while ago, I decided to purchase a relatively cheap USB microscope to see what V60 filters look like. This is one of the first images I took of a Hario tabbed paper filter:
I was really pleased that the microscope had enough resolution to see the filter pores ! This opened up the exciting possibility of characterizing the pores of coffee filters, and determining which ones are optimal for pour over brews. One thing that became immediately apparent is that the pores are not circular, and they don’t seem to be produced by perforating the paper membrane; instead, they just seem to occur naturally from spacings between piles of paper fibers.
When I saw that nice image, I immediately grabbed a Hario tabless paper filter and took another image:
As you can see, this one is less immediately interesting; we can barely see the pores ! After being a bit bummed out about this, I realized it was simply caused by the tabless filters being quite a bit thicker, which minimizes the contrast of the microscope’s LED light bouncing off the filter surface. Fortunately, it’s possible to fix this with a bit of image analysis. To do this, I wrote some code that re-adjusts the contrast of the image so that the pores become more apparent:
By that point, I realized that a proper filter analysis was indeed possible with this microscope, and things started to get really fun. I gathered this list of filters from various manufacturers:
Now, before we start discussing the actual analysis, I’d like to show you what each of them looks like under the microscope.
Hario Tabbed Bleached Paper Filters for V60
Hario Tabless Bleached Paper Filters for V60
Hario Tabbed Unbleached Paper Filters for V60
Cafec Bleached Paper Filters for V60
“Coffee Sock” Cloth Filters for V60
Aeropress Bleached Paper Filters
Aesir Bleached Paper Filters
Chemex Unbleached Paper Filters
Chemex Bleached Paper Filters
Osaka Metal Filter for Chemex and V60
Hario Unbleached Paper Filters for Siphon
Hario Cloth Filters for Siphon
Calibration of Image Scale
Before these images can be used in a more quantitative analysis, the size of each pixel must first be determined. To achieve this, the microscope comes with a small calibration plastic that looks like this:
As you can see, there are many patterns to choose from. I highly suspect that the printing standards for this calibration unit are not particularly great, so I decided to use the grid in the middle of the calibration plastic; I chose it because it provides many measurements of the scale at once, and it seems much easier for the manufacturer to get the spacing between printed lines right than the thickness of a single line. I took seven images of this grid at slightly different positions. These images each look like this:
These lines are marked as 0.1 mm (100 micron) wide. You can already see from the image that the line spacings are not perfectly uniform. There are also small defects on the image caused by imperfections in the plastic. I chose to take the median value of each row (a vertical median) to create a 1-dimensional signal of this grid, which as you can expect looks like an up-and-down pattern (dark pixels where a line falls, white pixels otherwise). I then used what is called an auto-correlation of that signal to determine by how much it can be shifted before lines overlap with each other. I did this on the seven images that I took; I then took the average pixel scale as my best measurement, and the standard deviation as the statistical uncertainty in my measurement. This measurement error does not include any systematics. For example, if the manufacturer actually printed a pattern of lines averaging 110-micron wide spacing, that 10 micron systematic error won’t be included in my error estimate. Because I have no way to know about such systematics, I just ignored them.
I also repeated a similar analysis with a horizontal median instead of a vertical one, to check that the pixel size is the same in the vertical and horizontal directions. Here’s what I found:
Horizontal scale: 6.759 ± 0.009 micron per pixel
Vertical scale: 6.754 ± 0.007 micron per pixel
As you can see, the two values agree within the error bars, which is encouraging. Therefore I assumed that the scaling is the same in both directions, and combined them together to obtain a final image scale estimation:
Combined scale: 6.756 ± 0.008 micron per pixel
Analysis of Pore Distributions
Now it’s time to dig even deeper into the technical details. As I mentioned, one of the more useful things to do with these microscope images is to determine the uniformity and quantity of pores in each filter. To do this, I opted to do some image smoothing with various bandpass sizes.
The unbleached paper filters I analyzed are brown rather than white. Because I don’t want color to affect my results or make it harder to bring out the contrast between the filter surface and its pores, I experimented visually and determined that adding up 100% of the red channel and 50% of the green channel was a good way to mitigate the effect of brown color on the detection of filter pores. I used none of the blue channel, because brown is a color that contains very little blue in it, and this means that the undesirable brown-white variations in color across the surface of an unbleached filter are maximized in the blue channel.
Here’s what an original color image of a Hario unbleached filter looks like:
If we look only at the (contrast-scaled) blue channel, variations in brown shade will be very obvious:
If instead we looked at the combined R+G+B channels, these variations would get diluted a bit:
But taking the red channel plus half of the green channel gets us something that removes these variations even more:
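In code, that channel combination is a one-liner; here is a minimal sketch (the function name and the sample pixel values are mine):

```python
import numpy as np

def brown_suppressed_gray(rgb):
    """Collapse an RGB image (H, W, 3), floats in [0, 1], to grayscale using
    100% of the red channel and 50% of the green channel, ignoring blue,
    which carries the strongest brown-white variations on unbleached paper."""
    gray = rgb[..., 0] + 0.5 * rgb[..., 1]
    return gray / 1.5  # rescale back to the [0, 1] range

# A brown-ish pixel and a white pixel both keep good contrast with dark pores
img = np.array([[[0.55, 0.35, 0.15], [1.0, 1.0, 1.0]]])
print(brown_suppressed_gray(img))
```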
As I mentioned before, an important step is to re-normalize the image contrast in order to see the pores clearly regardless of filter thickness. In astronomy I need to do this all the time, and in my experience one efficient way to do it that is robust against outlier pixels is to subtract the 0.5th percentile of the image everywhere (i.e., subtract almost the smallest image value), then divide the image by its 88th percentile (i.e., divide by almost the largest image value). I then set any outlier pixels darker than 0.0 to exactly 0.0, and any outlier pixels brighter than 1.0 to exactly 1.0.
Here’s what the image above would look like before applying such a contrast normalization:
The pores are much harder to see in the image above, compared to this one where the contrast was normalized:
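Here is a minimal Python sketch of that percentile-based normalization (the function name is mine; the percentile defaults simply follow the values quoted above):

```python
import numpy as np

def normalize_contrast(img, lo_pct=0.5, hi_pct=88.0):
    """Robust contrast normalization: subtract the 0.5th percentile,
    divide by the 88th percentile, then clip outliers to [0, 1]."""
    out = img - np.percentile(img, lo_pct)   # almost the smallest value
    out = out / np.percentile(out, hi_pct)   # almost the largest value
    return np.clip(out, 0.0, 1.0)

# On random data, the clipped output always spans exactly [0, 1]
rng_img = np.random.default_rng(0).random((100, 100))
norm = normalize_contrast(rng_img)
print(norm.min(), norm.max())
```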
There is another neat trick that can be used to remove large-scale variations across the image very efficiently, as long as they are larger in scale than the largest possible pores. Basically, you divide the original image by a smoothed version of itself, which leaves only the small-scale variations in the image. I used a Butterworth filter to do this; it uses a slightly different bandpass to smooth the image compared to the more typical Gaussian smoothing, but I found that it was better at preserving the exact pore shapes. In all cases, this step removed only the 10% lowest spatial frequencies (i.e., the largest spatial scales) from the images.
Here’s how the Butterworth filtering affects the image above:
As you can see, this removed a lot of the variations caused by creping or shadows.
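A minimal sketch of this large-scale removal follows; the cutoff of 0.05 cycles per pixel and the filter order are illustrative choices of mine, not the exact values used for the figures:

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.05, order=2):
    """Radial Butterworth low-pass in the Fourier domain; `cutoff` is in
    cycles per pixel (a fraction of the sampling frequency)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.hypot(fx, fy)
    response = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * response))

def remove_large_scales(img, cutoff=0.05, order=2, eps=1e-6):
    """Divide the image by a heavily smoothed version of itself, keeping
    only variations smaller than the largest plausible pores."""
    smooth = butterworth_lowpass(img, cutoff, order)
    return img / np.maximum(smooth, eps)

# A perfectly flat image divides out to 1.0 everywhere
flat = remove_large_scales(np.full((32, 32), 3.0))
print(flat[0, 0])
```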
Another step I took is to upsample the image by a factor of 20 using an interpolation algorithm. This allows me to measure pore sizes at the sub-pixel level, and to obtain smoother pore size distributions with more data points in them. The next step in detecting filter pores is to choose a threshold that separates a pore from the filter surface. I used a threshold of 0.5, which means that any pixel darker than half of the normalized intensity range is considered a pore. You can see visually what this results in, with all detected pores marked in red:
At that point, I simply counted the fraction of pixels that were marked as pores in this image.
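As a sketch of these two steps (I use a simple nearest-neighbour upsampling via np.kron where the real analysis used a smoother interpolation algorithm; the toy image is mine):

```python
import numpy as np

def detect_pores(gray, zoom=20, threshold=0.5):
    """Upsample a [0, 1]-normalized image, then flag as 'pore' every pixel
    darker than half of the normalized scale. Returns the boolean pore mask
    and the fraction of pixels marked as pores."""
    big = np.kron(gray, np.ones((zoom, zoom)))  # nearest-neighbour upsample
    mask = big < threshold
    return mask, mask.mean()

# Toy image: a dark circular pore (value 0.1) on a bright surface (0.9)
yy, xx = np.mgrid[0:20, 0:20]
toy = np.where((yy - 10) ** 2 + (xx - 10) ** 2 < 9, 0.1, 0.9)
mask, frac = detect_pores(toy)
print(frac)  # the pore covers 25 of 400 pixels -> 0.0625
```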
If you are not interested in the details of how I coded the construction of pore size distributions, you can skip the next paragraph and the equation!
To do it, I used the magic of undefined numbers in coding. In some programming languages, these are called “Not a Number” (NaN), and they can either be your worst enemies, because they crash all of your software, or your best friends, because you always keep them in mind and ensure your code doesn’t crash when they are encountered. Believe me, they should be your friends, because they open up a lot of nice coding tricks. One of these tricks is the following: you can create a mask image that has a value of 0.0 at every pixel corresponding to the filter surface, and NaN at every pixel where there is a pore. You can then use some fast and well-vetted box-smoothing algorithms to look at the larger scales in the image, and this will cause the filter surface to slowly creep inward and close down the detected pores.
Do this with many different smoothing box sizes (let’s call such a box size x), and you will gain information on the fraction of filter pores at every size! Another neat trick about the dynamics of how NaN values creep inward is that they will give you a list of pixel locations where square particles with a maximum radius of exactly x could pass through the pores; normal smoothing algorithms would underestimate what size of particles can pass, because they would blur the edges of filter pores. If you count the fraction of masked pixels (let’s call that m) for every box smoothing size (recall that we named this x), it can be demonstrated mathematically (I will spare you the details) that the distribution of pore radii f(x) is related to the second derivative of the masked fraction versus smoothing box size:
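In symbols, this relation can be written as follows (my reconstruction from the description above; the exact normalization constant does not matter when comparing filters):

$$f(x) \propto p^2 \, \frac{\mathrm{d}^2 m(x)}{\mathrm{d}x^2}$$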
where p is the pixel scale (in pixels per micron).
Basically, how much the fraction of masked pixels changes as you are smoothing the image gives you an indication of how much pore surface is being closed down.
I found this algorithm efficient to quickly measure pore sizes regardless of their shapes across the image, and measuring m(x) is basically asking: “If you take one square particle of radius x, what is the fraction of surface positions where it could pass through a filter pore?”
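The NaN-creeping step can be reproduced without NaNs: a pore pixel stays masked after box-smoothing with an x-sized box only if the entire box lies inside pores, which is exactly a binary erosion of the pore mask. A minimal numpy sketch (my own function names, ignoring image edges):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def masked_fraction(pore_mask, box):
    """m(x): fraction of pixels still flagged as 'pore' after the filter
    surface creeps inward by a box of `box` x `box` pixels."""
    windows = sliding_window_view(pore_mask, (box, box))
    survived = windows.all(axis=(-2, -1))  # whole box must lie inside a pore
    return survived.mean()

def pore_size_distribution(pore_mask, boxes):
    """f(x) from the second derivative of m(x) versus smoothing box size."""
    m = np.array([masked_fraction(pore_mask, b) for b in boxes])
    return np.gradient(np.gradient(m, boxes), boxes)

# Toy 6x6 square pore in a 20x20 image: m shrinks as the box grows
pore = np.zeros((20, 20), dtype=bool)
pore[5:11, 5:11] = True
print([round(masked_fraction(pore, b), 3) for b in (1, 3, 5, 7)])
```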
These calculations resulted in a pore size distribution for each microscope image that I obtained. I then combined the distributions from every image of a given filter into an average pore size distribution for that type of filter. I displayed pore diameters rather than radii, because I suspect this is what most people will assume when they hear “pore size”. Here’s an example of what I obtained with the Hario tabbed paper filters:
As you can see, the peak of the distribution in terms of number of pores seems to be located below the spatial resolution of the microscope, but we will see later that this is not an issue: we are interested in how the pore distribution affects flow rate, and pores smaller than 10 microns turn out to have an insignificant contribution to flow for all the filters that I tested.
Here’s how the distribution of each filter compared:
As you can see, the Osaka metal filter has way more pores than the other filters. I find it more interesting to compare the normalized pore distributions, and to group them by brew method:
As you can see from the distributions above, paper filters tend to have more uniform distributions in pore sizes (the slopes of the distributions are steeper). One thing I found really interesting is that all unbleached filters seem even more uniform. This hints that the bleaching process may be affecting the pore distributions of filters, possibly in a way that will hurt brew quality, but we’ll come back to this later.
The units of the distributions above can seem a bit confusing, as they are in number of pores per micron per millimeter squared. The “per micron” part comes from these distributions being probability densities, i.e., you need to integrate the area under the curve to obtain a real number of pores, which removes the “per micron” unit. The “per millimeter squared” part just refers to the surface of the filter. If you integrate all of these distributions across all possible pore sizes, for example, you can count how many pores per millimeter squared each filter type has. With a slightly different operation, you can also calculate the fraction of each filter’s surface that consists of pores (I removed the metal filter to get a clearer figure):
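The two integrals can be sketched like this (the distribution below is entirely made up to illustrate the units, and I write out the trapezoidal rule to keep the sketch dependency-light):

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical pore size distribution in pores / micron / mm^2 (made up),
# sampled on a grid of pore diameters in microns.
x = np.linspace(5.0, 60.0, 200)
f = 40.0 * np.exp(-x / 10.0)

pores_per_mm2 = trapezoid(f, x)                 # integrates out "per micron"
# Weighting each pore by its area (um^2) and converting to mm^2 gives the
# fraction of the filter surface that consists of pores.
open_fraction = trapezoid(f * np.pi * (x / 2) ** 2, x) / 1e6
print(round(pores_per_mm2, 1), round(open_fraction, 4))
```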
It is obvious from manipulating all the filters above that they have very different thicknesses. This is an important property of filters, because it will affect their flow rates. I thus ordered a digital caliper with a 20 micron precision to actually measure the thickness of every filter. Precisely measuring the thickness of a paper filter is actually not as straightforward as you might think; if you close the caliper too hard, the filter will get compressed and potentially damaged, and you won’t measure a realistic thickness in the context of water flowing through the filter.
To overcome this problem, I gently closed the caliper on each filter to obtain a more realistic thickness, but this brings up a whole new problem of measurement reliability. Fortunately, I can easily repeat these measurements many times on different filter locations and different filters, so I kept taking measurements until my error on the average thickness became much smaller than the quoted 20 micron precision of the caliper. Stats geeks will know that this error on the average can be calculated as the standard deviation of all values divided by the square root of the number of values.
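In code, the error on the average is just one line; the thickness samples below are made-up values for illustration:

```python
import numpy as np

# Hypothetical caliper readings of one filter's thickness, in microns
samples = np.array([205.0, 198.0, 211.0, 202.0, 207.0, 199.0, 204.0, 210.0])

mean_thickness = samples.mean()
# Standard error of the mean: sample standard deviation / sqrt(N)
sem = samples.std(ddof=1) / np.sqrt(samples.size)
print(round(mean_thickness, 1), round(sem, 2))
```

Taking more measurements shrinks this error as 1/sqrt(N), which is why ~700 measurements can beat the caliper's quoted 20 micron precision.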
I ended up taking a total of over 700 thickness measurements (across all filter types) before I was confident in my results. Here’s the list of filter thicknesses that I obtained:
Chemex unbleached: 167 ± 23 μm
Chemex bleached: 210 ± 22 μm
Hario unbleached: 203 ± 21 μm
Hario tabbed: 206 ± 21 μm
Cafec: 207 ± 21 μm
Hario tabless: 242 ± 22 μm
V60 cloth: 690 ± 22 μm
Aeropress: 120 ± 22 μm
Whatman: 170 ± 22 μm
Aesir: 220 ± 22 μm
Siphon paper: 220 ± 22 μm
Siphon cloth: 645 ± 22 μm
And here’s the same data, displayed as a figure:
Another important point about filter properties is how fast water flows through them on average. This is affected by factors like the pore size distribution and filter thickness, but also by their rigidity and how well they stick to the surface of a V60, because a better-sticking filter will slow down the upward escape of air and therefore slow down the flow. Because flow rate is a function of many complex and intertwined factors, I also measured it directly with a simple experiment further down.
We can however make a prediction of flow rate, based on an idealized planar filter with a uniform thickness and circular holes. The theory behind it is given in some detail here, but basically the only part you need is this one:
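In the form I find most convenient, the flow rate through a single circular pore can be written as (my reconstruction, consistent with the description that follows, with the pressure drop and water viscosity folded into the hidden proportionality constant):

$$q \propto \left( \frac{3}{r^3} + \frac{8\,t}{\pi\, r^4} \right)^{-1}$$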
where q is the flow rate (in volume of water per second) through a pore, r is the radius of the pore, and t is the thickness of the filter. The hidden proportionality constants are related to the pressure drop across the filter and the viscosity of water. The first term, in the third power of r, is called the Sampson term, and corresponds to the case of a filter much thinner than its pore sizes. The second term is called the Poiseuille term, and corresponds to the case where the pores are tubes much longer than their diameter. This combination of the two extreme cases is not exact, but it is much simpler than the real solution, and it always remains within 1% of the real value.
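Here is the same relation as a small Python function (the overall constant involving the pressure drop and viscosity is set to 1, so only relative comparisons are meaningful; the equation form is my reading of the description above):

```python
import numpy as np

def relative_flow(r, t):
    """Idealized relative flow rate through one circular pore of radius r in
    a filter of thickness t (same length units), combining the Sampson (thin
    filter) and Poiseuille (long tube) limits as additive resistances."""
    return 1.0 / (3.0 / r**3 + 8.0 * t / (np.pi * r**4))

# Limiting behaviors: thin filter -> ~r^3 / 3, thick filter -> ~pi r^4 / (8 t)
thin = relative_flow(10.0, 0.01)
thick = relative_flow(10.0, 1e4)
print(round(thin, 1), round(thick, 4))
```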
We can use the equation above to transform the distribution of pore sizes into a distribution of flow, and by integrating the full distribution we can estimate the total idealized flow rate for each filter. Here’s what I found, grouped by brew method:
As you can see from the figures above, pores above ~20 microns are responsible for most of the flow in all cases. This means that my microscope resolution (each pixel is 6.8 micron) is able to resolve the holes most relevant to understanding the flow dynamics. You might be surprised that the positions of the Hario tabless vs tabbed paper filters are swapped compared to the pure pore distributions (i.e., the Hario tabless seemed a bit more uniform in terms of pore sizes, but less uniform in terms of flow); this is because the tabless filters have a slight overdensity of very large pores, which are much more important than small pores when we talk about flow. In practice, this makes the Hario tabbed filters seem slightly preferable.
Here are the total idealized flow rates for all filters, obtained by integrating the flow rate distributions above:
After calculating these idealized flow rates, I went ahead and measured the real flow rates of all the pour over filters above. I took the immersion dripper switch from Hario, put a filter on it and stuck a marked chopstick in the filter. I used 150 g of room temperature (25°C) distilled water to pre-rinse the filter so that it stuck to the V60 walls, turned off the switch so that it wouldn’t flow, and added another 150 g of room temperature distilled water.
I used an iPhone stopwatch that allows you to place your finger on the “start” button and only starts when you release your finger; this makes it easier to trigger both the Hario switch and the stopwatch at the same time. Once the switch was opened, I used the iPhone LED light to pay careful attention to the water surface, and I hit the stopwatch again when it passed the black mark on the chopstick. I took 6 measurements per filter; this allowed me to get a better measurement and to estimate my measurement errors with standard deviations. Here are the resulting filter flows:
Cafec: 5.79 ± 0.03 mL/sec
Hario tabless: 6.89 ± 0.05 mL/sec
Hario tabbed: 11.03 ± 0.02 mL/sec
Hario unbleached: 15.3 ± 0.1 mL/sec
V60 Cloth: 18.1 ± 0.3 mL/sec
Osaka metal: 67.8 ± 0.6 mL/sec
Chemex bleached: 7.23 ± 0.02 mL/sec
Chemex unbleached: 9.82 ± 0.02 mL/sec
The detailed data are available here. Keep in mind that flow can be affected by water viscosity, your grind size, filter clogging, etc.; these values are therefore most interesting when compared to each other in a relative sense. The error bars are mostly due to my ability to start and stop the timer at the right time; my standard deviation on timings across all filters was 0.2 seconds, and the average human reflex delay is about 0.25 seconds, so it seems credible that the reflex inconsistency would be of that same order of magnitude.
Now let’s compare the idealized versus measured flow rates, and see if they correlate well:
If the idealized flow rates were perfect, all filters would fall along a straight line in this figure. As you can see, this is not the case at all; it seems that filters made of different materials or with different creping behave differently. I think this is due in part to how they adhere to the walls of the V60, but I think that creping inside the filter may also contribute to slowing down the flow, because water will prefer to flow mostly along the crepe valleys instead of everywhere on the filter surface. If that’s true, then filters smoothed on the inside would be preferable, as they would promote a more uniform flow across the filter surface. I won’t be able to determine whether that’s true or not with any more certainty in this post.
Another hypothesis I had is that the pores of paper filters may be better represented by diagonal tubes instead of straight ones, in which case the “effective” thickness of the filter would always be a factor larger than its true thickness. While this may be true, I observed no clear correlation between filter thickness and how offset the idealized flow rate was from the real flow rate; this indicates that this effect is not the biggest cause of these differences.
The Tainting Effect of Filters
Another often discussed factor about coffee filters is how they might directly affect the taste of a coffee by releasing chemical compounds into the beverage. This is what can produce an undesirable papery or cardboard taste, and it is the often-quoted reason why pour over filters need to be rinsed before brewing. To be sure, there are other reasons to do it: pre-heating the brewing vessel and making sure the filter is well positioned in it are also important reasons why we pre-rinse pour over filters.
I once did a preliminary experiment where I pre-rinsed Hario tabless and tabbed filters (both are bleached) and then immersed them in hot water for a few minutes, and tasted the water. I was not able to confidently say that I could taste anything different from just the tap water, so I concluded that I could use either of them without worrying about taste, at least if I pre-rinsed them.
But there is a more objective way to compare how much each filter can taint your coffee beverage: with distilled water and an electrical conductivity (EC) meter that measures total dissolved solids (TDS) in water. I decided to measure this by emulating a water temperature, contact time and water weight similar to typical brewing conditions. I put the dry filter in the Hario switch pour over device, turned off the flow, and poured 200.0 g of distilled water (1 ppm) into the device, weighed with a 0.1 g-precise brewing scale. I didn’t use more than 200 g to avoid over-filling it. I then immediately put a cork lid on top of it for heat insulation, and waited 3 minutes before I turned on the flow switch.
I then placed the water in a small ceramic cup, which I covered with a plastic lid to stop evaporation. I waited a few hours until the samples came close to room temperature (I measured them at 27°C, and the room temperature was 25°C). I decided 27°C was ok because the TDS measurements had stopped changing between 40°C and 27°C, and waiting for the samples to cool further would have taken several more hours. The EC meter that I used applies a temperature correction, but it is not perfect, so it’s best to remain within a few degrees of 25°C to get absolute TDS measurements. I made sure that all samples were measured at exactly the same temperature (27°C). From these measurements, I subtracted the 1 ppm of solids that were already present in my distilled water. Here’s what I obtained:
Hario tabbed bleached: 0 ppm
Hario tabless bleached: 0 ppm
Cafec bleached: 1 ppm
Hario unbleached: 5 ppm
As you can see, bleaching actually does what it’s supposed to do, but for some reason, the Cafec filters seem to still have a small amount of dissolvable compounds left in them. These measurements are consistent with my being unable to taste any effect of the bleached filters in a water immersion, especially given that I had also pre-rinsed them. This also seems to lend credence to Scott Rao who didn’t pre-rinse the Aeropress or Whatman filters that he places on top of his high-extraction espresso pucks.
This all seems like a cautionary tale against using unbleached filters, but what I was testing here is the inherent ability of these filters to taint your coffee beverage if you don’t pre-rinse them. As I mentioned before, tainting is not the only reason we pre-rinse pour over filters, so I would certainly not recommend you stop pre-rinsing bleached filters, but there are scenarios like an upper espresso puck filter where pre-rinsing might not matter. Now what would be even more interesting to me is to ask a more practically applicable question: what happens to these numbers if we pre-rinse the filters? And how much do we need to pre-rinse them?
To answer these questions, I carried out a different experiment that resembles two of the filter rinsing techniques that I use, applied to the Hario unbleached filters as a worst-case scenario. The goal of these experiments was to see whether I could detect any additional dissolved solids imparted by the filter.
The first rinsing technique that I most often use is to first pre-rinse with cold tap water, because brew water is a bit more precious. I then pour a little bit of hot brew water to preheat the vessel, which is probably not that important as I use the plastic V60, but also to replace water suspended in the filter with my brew water that has the desired alkaline buffer. This is probably a bit overkill as the amount of retained water is small, but it’s an easy thing to do.
For the first experiment, I therefore used room-temperature distilled water (1 ppm) with the Hario switch, but this time I left the flow switch open. I poured three pulses of approximately 50 grams of water into three distinct cups, and then a final pulse of hot distilled water into a small ceramic container. I covered the ceramic container with a plastic lid and let it cool down to room temperature, so that I would get a more accurate TDS reading with the EC meter. Here’s what I obtained, after subtracting the initial 1 ppm from every measurement:
First pour (dry filter): Poured 50.3 g water, resulted in 5 ppm TDS in 47 g output, i.e. 3.3 g water was retained by the filter and 0.235 mg of filter material was dissolved.
Second pour: Poured 49.7 g water, resulted in 3 ppm TDS in 48.5 g output, i.e. 1.2 g water was retained and 0.146 mg of filter material was dissolved.
Third pour: Poured 49.9 g water, resulted in 1 ppm TDS in 49.6 g output, i.e. 0.3 g water was retained and 0.050 mg of filter material was dissolved.
Fourth pour (hot water): Poured 49.8 g hot water, resulted in 1 ppm TDS in 49.8 g output, i.e. no water was retained and 0.050 mg of filter material was dissolved.
This indicates that tap water removed most of the filter materials, but switching to hot water allowed me to extract a tiny bit more, although I would be skeptical that 1 ppm of filter materials could be humanly tasted. It also seems that the filter retained a total of 4.8 g of water and contributed 0.481 mg of paper material.
For the second experiment, I used a similar method but used only hot water, and used 100 grams for my initial pour because I knew 50 grams would not be enough to remove all filter solids.
First pour (dry filter): Poured 100.0 g hot water, resulted in 10 ppm TDS in 95.8 g output, i.e. 4.2 g water was retained and 0.958 mg of filter material was dissolved.
Second pour: Poured 49.6 g hot water, resulted in 6 ppm TDS in 48.8 g output, i.e. 0.6 g water was retained and 0.293 mg of filter material was dissolved.
Third pour: Poured 49.3 g hot water, resulted in 0 ppm TDS in 48.5 g output, i.e. 0.8 g water was retained and no detectable filter material was dissolved.
In this case, it seems clear that 150g of hot rinsing water was enough to deplete the filter of all dissolvable materials. The filter seems to have retained a total of 5.6 grams of water, which is a bit more than the previous one. I don’t think this is due to the water being hot, but probably to how fast I moved the V60 when I picked it up. In this case, a total of 1.251 mg of filter material was dissolved, more than double what was dissolved in the first experiment. It’s therefore preferable to rinse unbleached filters with hot water, otherwise you’ll likely need a large amount of water. This is not true of any bleached filters; as you may recall, even the first pulses of water contained no dissolved solids.
One filter property that I expect could help achieve a uniform pour over extraction is how consistent the flow is across the surface of the filter. If pores are grouped in some regions of the filter, this will contribute to channeling, i.e., water will take preferential paths across your coffee bed and won’t extract it in a very uniform way. To estimate this effect for the various filters I investigated, I calculated the total idealized flow in each microscope image of a given filter, and looked at the standard deviation of these values across images. If a filter always shows a similar flow regardless of where I placed the microscope, that is a great thing: it means that channeling should be minimized.
I defined the uniformity index (UI) simply as the inverse of the standard deviation of the unit-less idealized flow across filter regions; this way, a higher UI corresponds to a more uniform filter. I did not assign any physical units to this index, because doing so would require specifying the water viscosity, the pressure drop across the coffee bed, etc. Hence, this index is only useful as a relative measure between filters, and it could not be compared with indices calculated from any other pore detection algorithm or microscope. Here are the UI values I calculated for all the filters under consideration:
The error bars above are based on small number statistics. Remember that a higher UI is good; it means the filter will flow more uniformly across its surface. Notice how the unbleached filters come out at the top again!
As I already mentioned, I suspect that filters with a higher UI may produce a more uniform extraction, maybe even more so if you use small doses in your pour overs. However, what I do not know is the observable importance of this effect; the effect could be too small to matter in practice. This however opens up interesting practical experiments; for example, it seems possible that the Hario unbleached paper filters may allow us to reach more even extractions, and that would result in a higher average extraction yield when everything else is kept fixed.
Filter Clogging Index
A typical problem that one can encounter when brewing coffee is a sudden decrease in flow rate caused by very fine coffee particles clogging the pores of a filter. For this reason, the values of flow rate that I measured above must be taken with a grain of salt: if a filter flows very fast because it has very large pores, coffee fines might be able to clog them, and as a result the filter flow rate might decrease heavily during a brew, and depend strongly on your brew method (e.g. the volume of your bloom phase, whether you stir the slurry or not, etc.). If the filter flows fast because it is thin, the decrease in flow rate might be less important, and less sensitive to your technique.
It is possible to calculate a clogging index in an objective way, by calculating the overlap between the pore size distribution of a filter and the particle size distribution of a grinder. Obviously, different grinders (and different grind settings) will produce different amounts of fines, but using any reference grinder will provide a valuable relative assessment of how each filter is sensitive or not to clogging. The resulting clogging index will therefore be most important to consult when using a grinder that produces a larger amount of fines – in my experience, you generally get more fines with smaller burrs, conical burrs, or anything that can widen the full particle size distribution like misaligned burrs.
Not everyone seems to prefer grinders that produce the lowest possible amount of fines (so far, that is my preference), but whatever your preference is, you should always try to avoid filter clogging. A filter will typically not clog in a uniform and immediate way, which means you will get channeling as water starts to follow preferential paths along the un-clogged filter pores. As you might already know, channeling will over-extract coffee along the channel paths, and cause astringency (a dry feeling in the mouth) in the resulting brew. Therefore, if you use a grinder that produces more fines, you should consider using filters that have smaller clogging indices. On the other hand, if you use a grinder that produces very few fines, this might be less important.
You might be aware that I wrote an app to measure particle size distributions, but I never used it in combination with a microscope, and without this it won’t be possible to use it to build a particle size distribution down to particle sizes as small as filter pores. In the future I will experiment with this, but for now I opted to use a laser diffraction particle size distribution instead. This distribution was generously sent to me by John Buckman and Scott Rao, who brought a sample of the Weber Workshops EG-1 (version 1) grinder with stock burrs to a laser diffraction device:
I used WebPlotDigitizer to extract the data from that photo, and I calculated the clogging index of each filter by performing the following operation:
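Spelled out, the operation looks like this (my reconstruction from the description that follows; q(r) is the idealized per-pore flow rate from earlier):

$$\mathrm{CI} = \frac{\displaystyle\int_0^{100\,\mu\mathrm{m}} p(x) \left[\int_x^{\infty} q(r)\, f(r)\, \mathrm{d}r \right] \mathrm{d}x}{\displaystyle\int_0^{100\,\mu\mathrm{m}} p(x)\, \mathrm{d}x \;\int_0^{\infty} q(r)\, f(r)\, \mathrm{d}r}$$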
where f(x) is the pore size distribution of the filter and p(x) is the laser diffraction particle size distribution of the EG-1. The numerator includes a reversed cumulative distribution function of the pore size distribution, because a coffee particle can contribute to clogging any filter pore that is the same size or larger. Technically, the closest thing I’d know how to call this operation is the integrated product of a cumulative distribution function with a probability density function; it’s not a convolution.
If this looked like alien symbols and sounded like gibberish to you, that’s totally fine. Here’s what it comes down to: the clogging index is an estimate of the average fraction of flow that can be clogged by coffee particles smaller than 200 microns in diameter (100 microns in radius). If CI = 90%, it means that the average coffee fine smaller than 200 microns would be able to block pores that contribute 90% of the flow rate for a given filter, if you use enough coffee and agitate it enough. It does not correspond to the true fraction of flow that will be blocked by clogging, because that would depend on how much coffee you use and how large your filter surface is; some fines may be small enough to block 90% of the filter flow, but if they are not present in large enough numbers to block all the pores, or never come in contact with the pores, then that won’t happen. I don’t want to attempt to make these numbers representative of the true fraction of blocked flow, because not only would it become overly complicated, it would also not be accurate, because coffee filters are not idealized flat planes with circular holes. These numbers are however very useful to compare filters in a relative sense, to understand which filters might clog more easily than others.
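A numerical sketch of the clogging index follows; all three distributions below are made-up stand-ins for the measured ones, and the trapezoidal integrals are written out explicitly:

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def clogging_index(x, f, p, q, x_max=100.0):
    """Average, over particle radii x below x_max microns (weighted by the
    particle size distribution p), of the fraction of total idealized flow
    carried by pores with radius >= x."""
    flow = f * q                              # flow-weighted pore distribution
    total = trapezoid(flow, x)
    # Reversed cumulative flow fraction: flow through pores of radius >= x
    frac = np.array([trapezoid(flow[i:], x[i:]) for i in range(x.size)]) / total
    sel = x <= x_max
    return trapezoid(p[sel] * frac[sel], x[sel]) / trapezoid(p[sel], x[sel])

# Made-up distributions on a shared grid of radii (microns)
x = np.linspace(1.0, 150.0, 300)
f = np.exp(-x / 15.0)                                   # pore sizes
p = np.exp(-(((x - 20.0) / 10.0) ** 2))                 # grinder fines
q = 1.0 / (3.0 / x**3 + 8.0 * 200.0 / (np.pi * x**4))   # 200 um thick filter
ci = clogging_index(x, f, p, q)
print(round(ci, 2))
```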
Here are the CI I obtained for all filters tested here:
Remember that a small CI is good, as it means your filter is less sensitive to clogging. The trend of unbleached filters coming up as the superior ones seems to hold yet again!
I know this post contains a massive amount of information, so I’d like to distill some of what I learned into practical recommendations. Please keep in mind that these are based on limited experiments, which come with caveats: for example, it is possible that wetting the filters affects the pore size distribution, and it is possible that more even pores across the filter surface only affect channeling in a very thin layer of the coffee bed. Therefore, the recommendations below should not be taken as absolute truth, but rather as a guide for what seems most worth exploring with your next V60 experiments.
Metal Filters in General
Metal filters suffer from a major problem in my opinion: the pores are typically so large that the flow is only constrained by the grind size, which will force you to grind very fine in order to obtain practical brew times. But this also means that a lot of fines will pass through the metal filter. Metal filters also differ from paper filters because they don’t filter through mazes of spacings between paper fibers, but rather through straight, large holes in an otherwise very uniform and flat surface. Therefore, metal filters won’t clog, and any fines small enough to penetrate the pores will end up directly in your beverage. This means metal filters won’t produce very clear brews like other pour over filters; instead they will produce a beverage with suspended solids and fines, with less clarity and more body. Personally, I’m not a fan of this.
Cloth Filters in General
Cloth filters suffer from a similar problem to the metal filters in terms of flow rate and large pores. However, they are in my opinion much worse because they are a hassle to properly clean and re-use. I wrote a lot more about this in my post about a high extraction yield siphon recipe, and I’d encourage you to read it if you want more information about proper management of cloth filters, but I just gave up on using them.
Paper Filters for Pour Over
Paper filters are much more interesting to me, as their smaller pores prevent coffee fines from passing into the beverage. The coffee bed does a lot of the job of retaining coffee fines, but if the filter had larger pores, some of them (either those already at the bottom or those that migrated down) would still pass through.
One big take-away from this analysis is that bleaching seems to deteriorate the quality of pore distributions. This is true in terms of the general spread in pore sizes, but also in how much the flow of water varies across the surface of the filter.
This was surprising to me, as my initial bias was to disregard unbleached filters because of their tainting potential. But as we also saw above, it seems possible to remove all dissolvable solids with an adequate pre-rinse using 150 grams of boiling water.
I have not yet accumulated any practical evidence for this, but I suspect that using unbleached filters might both reduce channeling in all situations (because flow is more uniform across the filter surface) and make your brew more robust against clogging (because of the smaller number of large pores), all while having a slightly faster flow rate (because there are more pores per unit surface) !
The same conclusions hold for Chemex, although their much larger filters almost certainly require more water during pre-rinse. I haven’t done this experiment, but based on their weight difference (5.4 g for Chemex versus 1.4 g for Hario), I would recommend using about 4 times more rinse water.
Another result that surprised me was how the Hario tabless filters seem to be worse than the Hario tabbed filters on all metrics; they flow more slowly, have less uniform pores and are more susceptible to clogging compared to the tabbed filters.
The Cafec filters show all signs of being a weird case of filters that are bleached more gently; both their pore distribution quality and ability to taint water is in between the bleached and unbleached cases. If you are really afraid of paper taste or hate using a lot of water to pre-rinse your filter, they might be the optimal solution for you, but keep in mind that they flow much slower than other paper filters.
All of these conclusions can be visualized in the figure below, where I placed all paper pour over filters on a graph of clogging index versus flow rate; filters further toward the right are easier to clog, and those toward the top flow faster. I also used larger symbols for the filters that have a more uniform flow across the filter surface, which means that larger symbols should be less susceptible to channeling regardless of your brew technique.
For Aeropress, we really only have two contenders here, as I doubt people will start ordering and cutting Whatman filters for their Aeropress brews. But even if you had the motivation to do so, it seems that the Aesir filters come out on top in terms of their robustness against both channeling and clogging. They do seem to flow slower however, because they are almost twice as thick as the standard Aeropress filters; they have more pores per surface area than the Aeropress filters, but probably not enough to make up for their thickness. However, remember that Aeropress brews have another variable that is not accessible to pour over: you can press harder, and make up for that difference.
It seems to me that Aesir filters are therefore more desirable for Aeropress brews, which did match my very limited and subjective experience.
What does a Clogged Filter Look Like ?
I decided to also try something fun with the microscope, and imaged a clogged and dried V60 Hario tabless paper filter:
We can clearly see that pores are stained with a brown color, perhaps caused by coffee oil, but we cannot see obvious fines blocking the entrance of a pore. That isn’t too surprising, as we might expect clogging to happen a little deeper than the filter surface.
I hope you enjoyed this post ! It is definitely the one that required by far the largest amount of work yet, but I think it was worth it.
I’d like to give a special thanks to Alex Levitt for sending me Cafec filters, and Scott Rao for giving me Chemex bleached filters, and for useful discussions without which I would not have thought about the possible importance of the uniformity index. I would also like to thank Doug Weber for useful comments.
I recently read James Hoffman’s fantastic book The World Atlas of Coffee and followed the also fantastic new Terroir course at the Barista Hustle web site. All of this reading motivated me to think a bit more about coffee varietals when I’m enjoying a cup of coffee. Previously, I had noticed some obvious taste differences between varietals, like the fact that typical Kenyans such as the SL28 varietal tend to have a nice taste of blackberries (or tomato when the roast is underdeveloped) but I did not think about it much further.
His book also made me realize that I couldn’t find much information about the typical taste profiles of different coffee varietals or processing methods, other than anecdotal facts and tasting notes of individual roast batches. Clearly, there is a ton of subjective tasting notes available out there, and I thought that if we could collate a big pile of them, I could probably distill them and see if some interesting trends come out.
I decided to contact Alex, a friend who built a really cool mobile application (for iOS and Android) called Firstbloom, where they actually did just that. They allow users to build their own personal library of various roasters’ bags and consult other people’s ratings. One really nice thing about it is that unavailable past offerings don’t disappear (some day, someone will need to explain to me why roasters always completely delete the web pages of their past offerings, rather than just unlink them). Anyway, Alex was super happy to help me with this idea, and he generously sent me his metadata on 1,500 coffee bags, with varietals, tasting notes and processing for every one of them ! Alex and his team built Firstbloom as a passion project (much like my blog), and I’m highly appreciative of their work and precious help with this idea. So, in a way, today’s blog post was sponsored by Firstbloom’s incredible efforts at collating these data, otherwise it would not have been possible.
Taste Descriptors by Coffee Varietal
The first thing I decided to investigate is the taste descriptors that come up most often for each coffee varietal. For this I only used coffee processed with the washing method, because it is the most abundant and I also think it is the process that will bring up varietal characteristics most clearly without influencing them (don’t tell Scott Rao, but there are some naturals that I love even if I think they distort the tasting profile). A very neat tool to visualize such data is a word cloud; each word is displayed with a size representative of how often it came up in a list. There are some Python packages that do basic word clouds, but I found this website, which offers way more options. Coding that from scratch seemed like an annoying enterprise, so I decided to just use it.
I did not just collate all of the taste descriptors and count the number of repetitions when I assigned weights to each word, the way one would typically build a word cloud. This would be an ok way to do things, but it would not necessarily amplify the differences from one varietal to the other. As you can see in that figure, there are some words that come up way more often than others when describing any kind of coffee:
These descriptors are not the most interesting to me, as they are the ones that come up most often regardless of varietal. What I would rather see are the specific descriptors that come out in one varietal more than in others. To do this, I counted the number of times a descriptor happened within a varietal, and normalized that by the number of times it happened in any coffee, so the descriptors in larger fonts above will be somewhat muted. In other words, if a taste descriptor happens a lot for SL28 and not that much for other varietals, it will be amplified more than a descriptor that happens a lot for SL28 as well as any other coffee. There is one potential drawback of doing this: imagine there is just one bag of coffee ever that had the taste descriptor carrot. It would end up being extremely amplified in the word cloud of the one varietal where it happened, because it was never used for any other coffee. To mitigate that effect, I put a “ceiling” on the level of amplification that rare words can obtain; I decided that no word could be amplified by a factor larger than 3.3 because of its rare use in other coffee varietals.
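The exact weighting scheme isn’t written out above, so here is a minimal sketch of one way to implement it; the “lift” ratio and its use as a multiplier are my interpretation, with only the 3.3 amplification ceiling taken from the text:

```python
from collections import Counter

def cloud_weights(varietal_notes, all_notes, max_boost=3.3):
    """Weight descriptors by how specific they are to one varietal.

    varietal_notes / all_notes are flat lists of taste descriptors.
    The 'lift' ratio below is my interpretation of the normalization
    described in the text; only the 3.3 ceiling is taken from the post.
    """
    var_counts = Counter(varietal_notes)
    all_counts = Counter(all_notes)
    n_var, n_all = len(varietal_notes), len(all_notes)
    weights = {}
    for word, count in var_counts.items():
        # How over-represented the word is in this varietal vs all coffees
        lift = (count / n_var) / (all_counts[word] / n_all)
        # Cap the amplification so that one-off rare words don't dominate
        weights[word] = count * min(lift, max_boost)
    return weights
```

With this kind of weighting, a word like blackberry that shows up almost exclusively for one varietal gets boosted, while a generic word like chocolate that shows up everywhere gets muted, and a word used only once ever cannot be boosted by more than the 3.3 ceiling.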
Now, the fun part ! Here are some collections of taste descriptors for some of the most widespread coffee varietals:
This already jumped out as very representative of my experience. Ethiopian heirloom coffees often taste very floral and have a distinct citrus-like character often described as lime (see this great book review by James Hoffman where he talks a bit more about “heirlooms”). As I expected, SL28 is largely dominated by descriptors like blackberries or black currant. Just writing this makes me want to brew a good Kenyan cup. The Geisha varietal seems dominated by floral and fruity descriptors, my personal favorites (I’m so original). One thing that surprised me a bit more is how similar Caturra and Bourbon come out. But this is not actually that surprising, because Caturra arose from a naturally occurring mutation of the Bourbon varietal (as described at World Coffee Research).
There are some significant caveats I should add to these results. First, there are some taste descriptors that are caused by roasting more than varietal. I suspect that some varietals like Caturra are a bit harder to roast properly, and harder to diagnose once roasted, compared to most Kenyan and Ethiopian coffees. If I’m right about this, then some of the unique characters of Caturra above might be caused by less optimal average roasting, and not by genes. For example, I suspect that some nutty descriptors might fall in that category.
Another likely bias comes from terroir, which might have a strong effect on taste; by terroir, I refer to the type of soil, weather, shade and other aspects of how farmers take care of their crops. Add to this the fact that some countries like Kenya often grow a very selective list of varietals (e.g. SL28, SL34, Batian and Ruiru 11), and you will end up with a strong varietal-versus-terroir correlation. This means that some of the taste descriptors coming up in SL28 above could have more to do with terroir than actual coffee genes. In order to tell them apart, we would need a lot more data on typical Kenyan crops grown outside of Kenya. If we look at the word clouds of these four particular varietals next to one another, it might make you worry even more about this strong correlation:
These four varietals are also sometimes grown, roasted and sold as a blend, so their taste descriptors will also tend to be somewhat mixed together, even in the unlikely scenario where there is no effect from terroir.
Although these word clouds are biased by terroir and roasting, they are still super useful to me, because the bags of coffee that I’m gonna drink are also affected by the same biases. From a user perspective, it’s therefore really fun to know which varietals will typically get you in what kind of taste territories. I would however bet that in 10 years, a typical user experience might shift far from the word clouds above.
But even this more limited use of the word clouds above is not perfect, because there’s yet another effect that clearly taints these word clouds, and will make them a little bit less reliable as a guide to which coffee you want to buy: human bias. I found that roasters will very rarely write tomato on their bag of Kenyan coffee, even when it tastes like nothing else but tomato soup. This is not surprising, because tomato is widely known as a roast defect from under-developed Kenyan coffee, so it would be bad self-publicity to write it on a bag of coffee. Therefore, there are some “surprise” taste descriptors that won’t end up in the word clouds above, but may end up in your cup of coffee !
Taste Descriptors by Coffee Processing
Another aspect that is widely known to affect the taste profile of a cup of coffee is the process by which the pulp is removed and the coffee beans are dried, generally referred to with the umbrella term processing. So, I decided to make similar charts, but this time grouping bags by processing rather than varietal. This is what came up for the two most dominant processing methods, washed versus natural:
Hahaha, that was just a joke ! Here’s what really came up from the actual data on natural-processed coffees:
I may be joking about it, but there are a lot of naturals that don’t actually taste dirty at all. I enjoy these “clean” naturals much more than the other ones, but that’s just my preference. For example, all the natural coffees I have ordered from Gardelli so far were very clean, and I loved them.
The same limitations that I mentioned above still apply here, plus a new one: some varietals tend to never be natural-processed (e.g. the typical Kenyan varietals) or vice-versa, and that will introduce some correlation between varietal and processing, further biasing the two word clouds above. I remember reading that the way “washed” coffees are processed in Kenya versus Colombia is also very different, so that’s yet another bias !
Speaking of human biases, here’s a really funny observation:
One of the top descriptors of honey-processed coffee is honey… hmmmm suuure, I’m very skeptical that this is not just tasters being influenced by the actual process name. I would bet a full dollar that other descriptors in the sweet category would replace it if we did this blindly.
While I showed you the word clouds for the main categories, I generated a lot more of them. I will gather them all at the end of this post so that it doesn’t get too cluttered with figures !
I decided to split the wheel in two parts, where all the flavors generally seen as positive are on the top half, and those generally seen as less desirable effects of roast or green coffee are placed on the bottom half. And while we’re talking about halves, why not make the whole thing look like a coffee bean ? I used an elliptical coordinate system to achieve this. Once you get familiar with these figures, they can tell you a lot about the coffee just from a quick glance, and I love that. Here are the ones I generated for the main varietals; there are similar figures for 30 varietals and 14 coffee processing methods (with high-resolution vectorial PDF versions) which I made available to my Patreon supporters (Bourbon-tier and up):
As you can see, Geisha is the king of floral attributes ! It’s also interesting how Bourbon and Caturra often have nutty flavors typically associated with roasting. It makes me wonder whether it’s harder to make great roasts out of them, but I don’t know enough about roasting.
Ranking Specific Flavors
There is yet another way of visualizing these data that would be interesting; that one is an idea from Scott Rao. I selected a few taste descriptors that are often sought after by coffee drinkers, and ranked the different varietals and processes by how often they come up in their respective categories. This time I didn’t normalize the fractions by how often they come up in all coffees, because it won’t affect the order of the rankings. I did however add error bars (for the math geeks, Poisson errors) to represent the small-number statistics; in other words, when a given varietal/process is represented with fewer bags, the true fraction of how often it’s described with one word is more uncertain, because we don’t have enough data to constrain it well.
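As a concrete sketch of these error bars, assuming simple √k Poisson counting errors on the numerator (the treatment of the k = 0 case is my own choice, not from the post):

```python
import numpy as np

def descriptor_fraction(k, n):
    """Fraction of bags described with a given word, with a Poisson error.

    k: number of bags of a varietal/process described with the word
    n: total number of bags in that varietal/process
    The sqrt(k)/n error bar reflects small-number statistics: with fewer
    bags, the true fraction is less well constrained. (Treating k = 0 as
    a 1/n upper limit is my own choice here.)
    """
    frac = k / n
    err = np.sqrt(k) / n if k > 0 else 1.0 / n
    return frac, err

frac, err = descriptor_fraction(9, 100)
print(f"{frac:.2f} ± {err:.2f}")  # 0.09 ± 0.03
```

Notice how a varietal with only 10 bags gets a much larger error bar than one with 1,000 bags, even if the measured fraction is identical.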
Something interesting Matt Perger noticed in the “Sweet” figure is that smaller-bean varietals tend to be on the sweeter side, which could be explained by the more even surface-versus-core roasting of small beans.
I hope you found this analysis as interesting as I did ! I’d like to thank Scott Rao, Matt Perger and Patrick Liu for useful thoughts and comments, as well as the developers of the Firstbloom app again !
Now, here are more figures I generated ! I made many more ranking figures, available to my Patreon supporters.
Varietal Word Clouds
Processing Word Clouds
If you loved the figures in this post, there are many more like them available to my Bourbon-tier Patreon supporters here, for 30 coffee varietals and 14 processing methods !
The picture above, sent by my friend Francisco Quijano, is an awesome demonstration of how different V60 (left) and Aeropress (right) brews of the same coffee can look.
I’d like to talk about why coffee brewed by immersion (e.g. french press, Aeropress, siphon) tastes, and even looks, so different from coffee prepared by percolation (e.g. pour over or drip). Some of you may have noticed that this holds even when you compare them at similar extraction yields and concentrations.
In a previous post, I talked about a more general equation for extraction yield that should provide a better correlation with the chemical profile of a coffee cup, and therefore with its taste profile. Obviously, it doesn’t capture effects like changing coffee, roast curve, or even grind size. But there’s something else fundamental that the general equation cannot capture, because the taste profiles generated by immersion and percolation brews just live in different landscapes. Today I want to explore why that is.
The crucial difference between a percolation and an immersion is simple: a percolation extracts coffee with clean water, whereas an immersion extracts coffee with water that gradually becomes more and more concentrated, because the water sits with the coffee grounds for the whole brew.
Because of this, the speed of extraction levels off more quickly in an immersion brew. This arises from the physics of diffusion; any solvent more concentrated in a specific chemical compound will have a much harder time extracting that same compound from the coffee grounds. This concept is described by the Noyes-Whitney equation:
You can read more about the different terms of this equation here, but basically this just tells you that the rate at which a compound gets extracted is higher when the solution is much less concentrated in it than the coffee particle.
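As a toy illustration of this equation, here is a dimension-less sketch (my own simplification, where the particle and slurry concentrations are both expressed as fractions of the compound’s total extractable mass):

```python
import numpy as np

def brew(k=1.0, dt=0.001, t_max=10.0, percolation=False):
    """Toy, dimension-less Noyes-Whitney extraction of a single compound.

    The extraction rate is proportional to the difference between the
    compound's concentration in the particle (the mass still inside)
    and in the slurry (the mass already extracted). In a percolation,
    the slurry term is forced to zero because fresh water constantly
    replaces the slurry.
    """
    extracted = 0.0
    history = []
    for _ in range(int(t_max / dt)):
        slurry = 0.0 if percolation else extracted
        rate = k * max((1.0 - extracted) - slurry, 0.0)
        extracted += rate * dt  # simple Euler integration step
        history.append(extracted)
    return np.array(history)

immersion = brew(percolation=False)   # levels off at the balance point
percolation = brew(percolation=True)  # keeps extracting until depletion
```

In this toy version the immersion levels off where the particle and slurry concentrations balance, while the percolation keeps extracting until the compound is essentially depleted; the real units and rate constants are of course different.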
So far, it would seem like this only explains why an immersion would extract slower, not why it would extract a different profile of chemical compounds. But there’s a catch: even if water is concentrated in a specific compound, that doesn’t prevent it from extracting other compounds efficiently. Therefore, if you wait long enough, an immersion brew will very closely reflect the chemical composition that was initially in the coffee bean, as each individual chemical compound comes to balance with the slurry. If you stop the brew before everything is extracted (which we usually do), the slowest-extracting compounds will be a little bit under-represented, but otherwise the chemical composition of your cup will be a pretty good reflection of the chemical composition of the coffee bean.
In a percolation brew, things happen very differently. This is true because at every moment the slurry water is replaced with cleaner water, forcing the extraction speed to remain high for as long as a compound isn’t depleted from the coffee grounds. As you might deduce, this means that the fast-extracting compounds will be over-represented in a percolation brew.
In other words, the chemical profile of a percolation brew will be very strongly correlated with their extraction speed, whereas an immersion brew will be instead strongly correlated with how abundant each chemical compound is in the coffee bean. It’s like listening to music with two different equalizers on.
I like explanations with words, but I like figures even more. We can explore the difference between percolation and immersion brews by simulating two different brews with a very simple toy model, based on solving the Noyes-Whitney equation numerically. In the percolation case, the slurry concentration term will always be forced to zero as we constantly replace the slurry with fresh water.
Let’s imagine a coffee bean that contains 30 different chemical compounds, each with a different abundance and a different extraction speed. I generated 30 such chemical compounds at random, and obtained this distribution:
Each red circle here is one of 30 simulated chemical compounds. Those further to the right are present in larger quantities, and those further up are easier to extract.
Now, let’s solve the Noyes-Whitney equation for one of them. Here’s how the brew concentration goes up over time for the fastest-extracting compound:
This should be nothing surprising: the extraction speed levels off much earlier during the immersion brew, because the slurry water becomes too concentrated. In the percolation brew, the extraction is still happening for as long as the chemical isn’t depleted from the coffee particles.
Now, we want to compare these two beverages at the same extraction yield. To do this, I generated the extraction of all 30 compounds simultaneously, and stopped the brew when the average extraction yield reached 20.0%. I made the assumption that the chemical compounds that can be extracted from the bean account for 28.0% of its mass. Unsurprisingly, the immersion brew took a bit more time to reach that average extraction yield.
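The actual compound distributions used here aren’t reproduced in this post, but a rough sketch of this kind of simulation might look as follows; the uniform random distributions, the random seed, and the water dilution factor `w` are all assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30
# Hypothetical compounds: random abundances scaled so that they sum to
# 28% of the bean mass (the extractable fraction), plus random speeds.
abundance = rng.uniform(0.2, 1.0, n)
abundance *= 0.28 / abundance.sum()
speed = rng.uniform(0.2, 3.0, n)
w = 3.0  # assumed water-to-extractables dilution factor (my choice)

def brew_until(target_ey, percolation, dt=1e-3, max_steps=10**6):
    """Euler-integrate all compounds until the average EY hits target_ey."""
    extracted = np.zeros(n)
    for _ in range(max_steps):
        if extracted.sum() >= target_ey:
            break
        slurry = 0.0 if percolation else extracted / w
        rate = speed * np.clip((abundance - extracted) - slurry, 0.0, None)
        extracted += rate * dt
    return extracted

imm = brew_until(0.20, percolation=False)
perc = brew_until(0.20, percolation=True)

# The fastest-extracting compound ends up more fully extracted in the
# percolation brew than in the immersion brew.
i = int(np.argmax(speed))
print(f"fastest compound: percolation {perc[i] / abundance[i]:.2f}, "
      f"immersion {imm[i] / abundance[i]:.2f} of its total mass")
```

Even at the same 20.0% average extraction yield, the fast compounds dominate the percolation cup while the immersion cup stays closer to the bean’s internal proportions, which is exactly the effect shown in the figures below.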
Something really fun we can do with this simulation is look at the profile of chemicals in the final cup for the immersion versus percolation, and compare it with the chemical abundances in the coffee bean. This is what we get:
Each bar in this figure represents one of the 30 chemical compounds we generated randomly. I placed them in order of extraction speed; those further to the right extract faster.
One thing that immediately jumps out is how the immersion brew (red) is much more similar to the internal coffee composition (black) than the percolation brew (blue). The only difference lies in the compounds that are slowest to extract, as expected. If we let the immersion brew continue, this difference would become smaller and smaller, and eventually subside completely.
The percolation brew looks quite dramatically different from the internal composition of the coffee bean ! As you can see, those compounds that extract fast become completely over-represented compared to the internal coffee composition.
As it’s already quite clear from the figure above, an immersion brew composition correlates mostly with the abundance of chemicals inside the coffee bean:
This correlation is quite strong as you can see, with the exception of the 6 slowest compounds that are still out of balance because we stopped the brew before an average extraction yield of 28.0%.
Here’s another interesting observation: a percolation brew correlates strongly with how fast each chemical compound can extract:
As you can see above, the compounds further to the right are much more represented in the cup of percolation coffee, whereas they are not necessarily over-represented in the cup of immersion coffee.
All of these considerations only hold true because we stop the brew before the maximum theoretical extraction ceiling. If we were to extract everything from the coffee beans, then obviously the immersion and percolation brews would end up with the exact same chemical profiles; the brew times would just be different, and the concentrations would also be different if you used different quantities of brew water. To demonstrate this, I let the simulation run all the way to 28.0% and made a video of how the flavor profiles extract. You can see that, although the beverages converge to different concentrations, they both end up with the same profile as the coffee bean composition:
Obviously, there are other complicating factors that can make different types of brew even more different. For example, the presence of channeling in a percolation brew can bring out a lot more of the slow-extracting (usually astringent) compounds from a small fraction of the coffee particles, which as far as I know never happens in an immersion brew. But if you make sure to minimize channels in your coffee (I give some tricks on how to do that in my V60 recipe and pour over video), this won’t be a significant effect.
Another potential difference is suspended solids. Those are almost always filtered out by the bed of coffee itself in a percolation brew, whereas they will remain in your cup in simpler immersion brew methods like the french press. These compounds can have a strong effect on taste (usually muting it), even if they are not dissolved in water.
I’m sure some of you were surprised when I listed the siphon and Aeropress as examples of immersion brews at the start of this post. I know they are mixed methods, but in practice their tastes bear more resemblance to an immersion. I suspect that the reason for that is simply that most of the extraction happens during the initial immersion phase, not during the subsequent percolation phase. Most likely, though, their chemical profile looks like some average of the percolation and immersion profiles.
You might think that this whole post is defeating the usefulness of the general extraction yield equation I mentioned before, but I don’t think it is. I will need a mass spectrometer to prove it, but I think that (1) for a fixed coffee and method, it will make the extraction yield measurement depend less on the amount of retained water; and (2) for different brew methods, it will make the comparison a little bit better, even if it’s never perfect. The comparison will certainly be made much better for two different methods in the same category, e.g. a Buchner percolation (without immersion phase) and a V60.
Before closing, I’d like to add a caveat to the analysis above: what I carried out here is a simulation of random chemical compounds that don’t necessarily exist, just to demonstrate the concept of how extraction happens differently in immersion versus percolation. It is a dimension-less analysis (i.e. it does not involve any physical units), and therefore it does not indicate how significant these differences between percolation and immersion are. I do not know whether they cause 1% or 50% of the taste difference between percolation and immersion (the rest of the difference would be colloids, fines, etc.), but my guess is that it is much less than 50%. One way to test this would be to perform blind comparisons of Hario immersion switch and V60 brews, but keeping all other variables constant will be a real challenge. Just think of how the slurry temperature evolves during the brew in both cases: depending on the kettle temperature, constantly replacing the brew water versus keeping the same water in an immersion will have a significant effect on the temperature profile, unless extreme caution and precise instruments are used !
I’d like to thank Francisco Quijano for sending me his awesome photo that serves as this blog post’s header, and Matt Perger for useful comments. I’d like to thank Aurelien He for proofreading comments.
Today I decided to measure how repeatable and consistent my manual V60 pour overs are. My expectations were very low, given how variable an average extraction yield I often get when I brew the same coffee a few days apart.
To do this, I used some older coffee I had left from a local roaster to prepare five V60 pour overs in a row. I started by preparing a gallon of water with the Rao/Perger water recipe described here so that I wouldn’t need to switch gallons, and to therefore mitigate any possible manipulation error when I prepare my brew water. The coffee beans I used are the Quintero Ignacio Colombian (a mix of Caturra, Typica and Tabi varietals) from Saint-Henri coffee roasters, roasted on February 25 2019, which I kept vacuum sealed in the freezer between then and the date of the experiment, May 26 2019. I took the beans out of the freezer about a week before the experiment, and opened the vacuum-sealed bag right before brewing. Its roast profile is on the slightly dark side, where you get some hints of smoky flavors.
I used grind setting 7.0 at a 700 RPM motor speed on my Weber Workshops EG-1 grinder. It is zeroed so that burrs touch completely at 0.0, so 7.0 means that the burrs are spaced 350 microns apart. I used the plastic Hario V60 with the tabless Hario V60 bleached filters. I used the brew recipe that I described in this post and that follows Scott Rao’s method except for a few modifications. You can also find a video of this method here (pardon my poor filming skills, I will eventually make a better video).
I used a 22 gram dose and a total water weight as close as possible to 374 grams to achieve a 1:17 ratio. I prepared a nest shape with chopsticks as I described here. I tried to aim for 77 grams of bloom water; this is a bit higher than the 3:1 bloom ratio recommended by Scott Rao, but I typically find it easier to quickly wet all grounds with that much water. I “rao-spun” the bloom quite heavily after pouring ~77 grams of water in, to ensure that all grounds are wet, and I used a chopstick to pop any bubbles that were forming. I did not use a spoon to stir the bloom. I used a 45 seconds bloom in all cases.
I pre-heated the kettle to 187°F while I was grinding the dose, pre-wetting the filter thoroughly (first with tap water and then with brew water), and preparing the coffee bed. I then boiled the water to 212°F right when I needed it, to avoid having minerals precipitate during a long boil (I’m not sure yet how important this effect is). I did not click my grinder, which causes it to retain 0.5 grams of coffee instead of < 0.1 grams, but this also causes much less chaff and fines to be present in the dose because they preferentially stick to the grinder chute. I also used the Weber dose preparation shaker, which helps distribute fines uniformly throughout the coffee bed.
I tried to be as consistent as possible during my five brews – I think the hardest part is keeping a constant flow rate (the newer Acaia Model S scale may help with that because it apparently measures live flow rate, but I don’t have it), which resulted in slightly different brew times. I always initiated the second pour at 1:45, which helps discriminate which part I poured faster or slower when the times differ. I used the Brewista artisan gooseneck kettle, which helps achieve a consistent flow rate, but it also means I had to press “quick boil” again every time I put the kettle back on its base (turns out I did not forget to do it during the five brews).
All brews had a very flat coffee bed at the end, and all were level except for the fourth brew, which was very slightly slanted with the higher side away from me (i.e. the water drew down at the furthest point from me less than half a second before the closest point). When the surface of the water passed that of the coffee bed and I could see light reflecting on the surface of the wet coffee bed, I noted the brew time, waited about 3 seconds and placed the V60 on top of a small container with the same aperture as the V60’s inner plastic ring. I gently swung the V60 up and down to collect 5-10 drops of coffee to determine the approximate concentration of interstitial liquid in the slurry at the end of the brew. This is useful to determine a more accurate average extraction yield that is more independent of the amount of retained water; for a detailed discussion on this, you can see this blog post and this one too.
I cleaned the VST refractometer lens with alcohol and re-zeroed it with distilled water, then measured the concentration of the last few drops and of the beverage using the recommendations of Scott Rao (also see this awesome guide by Mitch Hale). During this experiment, I realized that even if your refractometer measures a 0.00% concentration for distilled water, it is still very important to re-zero it; my TDS readings would otherwise be 0.10% too low because the weather is getting warmer in Montreal and I had not re-zeroed in more than a month ! You can find more details about this on my Instagram page.
Here’s how the five brews ended up comparing to each other:
Weight of bloom water:
Brew 1: 77 grams
Brew 2: 75 grams
Brew 3: 76 grams
Brew 4: 77 grams
Brew 5: 77 grams
Full span: 2 grams
Standard deviation: 0.9 ± 0.2 grams
Time where I reached 200 grams:
Brew 1: 1:07
Brew 2: 1:11
Brew 3: 1:11
Brew 4: 1:10
Brew 5: 1:09
Full span: 4 seconds
Standard deviation: 1.6 ± 0.3 seconds
Time where I reached total water weight:
Brew 1: 2:25
Brew 2: 2:23
Brew 3: 2:23
Brew 4: 2:19
Brew 5: 2:13
Full span: 12 seconds
Standard deviation: 5 ± 1 seconds
Total time at drawdown:
Brew 1: 3:04
Brew 2: 3:09
Brew 3: 3:05
Brew 4: 3:10
Brew 5: 3:08
Full span: 6 seconds
Standard deviation: 2.6 ± 0.4 seconds
Weight of the beverage:
Brew 1: 322.3 grams
Brew 2: 325.6 grams
Brew 3: 325.5 grams
Brew 4: 325.9 grams
Brew 5: 325.5 grams
Full span: 3.6 grams
Standard deviation: 1.5 ± 0.4 grams
Concentration of the last few drops:
Brew 1: 0.59%
Brew 2: 0.58%
Brew 3: 0.56%
Brew 4: 0.51%
Brew 5: 0.47%
Full span: 0.12%
Standard deviation: 0.051 ± 0.009%
Concentration of the beverage:
Brew 1: 1.42%
Brew 2: 1.41%
Brew 3: 1.42%
Brew 4: 1.43%
Brew 5: 1.43%
Full span: 0.02%
Standard deviation: 0.008 ± 0.002%
As you can see, my timings varied by some amount, but the effect on the concentration of total dissolved solids and on the average extraction yields was quite small. Another really interesting part is the fact that the approximate “shareable” average extraction yields varied by more than those calculated with the more exact formula. This may be explained by the fact that the more exact formula better compensates for different liquid retained ratios, likely caused by my having waited more or less before I removed the V60 from the coffee pot.
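For the curious, here is how such an extraction yield correction can be sketched in Python. This is my paraphrase of the idea rather than the exact formula from the linked posts, and the 22 gram dose and 374 grams of brew water are assumed for illustration (a 1:17 ratio); the corrected version also counts the solids dissolved in the retained liquid, approximated as (brew water − beverage weight) at the last-drops concentration:

```python
def ey_simple(tds_pct, beverage_g, dose_g):
    """'Shareable' average extraction yield: only the dissolved solids
    that made it into the cup."""
    return tds_pct / 100 * beverage_g / dose_g * 100

def ey_corrected(tds_pct, beverage_g, dose_g, water_g, drops_tds_pct):
    """Also count the solids left in the retained liquid, approximated
    as (brew water - beverage) at the concentration of the last drops."""
    retained_g = water_g - beverage_g
    solids_g = tds_pct / 100 * beverage_g + drops_tds_pct / 100 * retained_g
    return solids_g / dose_g * 100

# Brew 1 above, with an assumed 22 g dose and 374 g of brew water
print(round(ey_simple(1.42, 322.3, 22.0), 1))                  # 20.8
print(round(ey_corrected(1.42, 322.3, 22.0, 374.0, 0.59), 1))  # 22.2
```

Keep in mind that part of the retained liquid is water absorbed inside the grounds rather than free interstitial liquid, so this is only an approximation; the posts linked above discuss the subtleties.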
I honestly did not expect to reach a consistency of < 0.02%, close to the inherent precision of the VST refractometer (0.01%), but it seems that with enough concentration it is possible ! I do not think that I can reach this kind of accuracy first thing in the morning when I usually prepare my coffee. This experiment did teach me something important however: it is of utmost importance to be really careful in cleaning the VST lens with alcohol, properly re-zeroing it with distilled water, and being patient while the sample reaches the lens and room temperature. Neglecting any of these steps can cause measurement errors much larger than 0.02% !
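For reference, the spans and standard deviations quoted in this post can be reproduced with a few lines of Python. I am assuming here that the ± term on each standard deviation is the standard error of a sample standard deviation, approximately σ/√(2(N−1)) for small, roughly Gaussian samples:

```python
from math import sqrt
from statistics import stdev

def spread_stats(values):
    """Full span, sample standard deviation, and the approximate
    uncertainty on that standard deviation for a small sample."""
    n = len(values)
    s = stdev(values)              # sample std (N-1 denominator)
    s_err = s / sqrt(2 * (n - 1))  # standard error of s (Gaussian approx.)
    return max(values) - min(values), s, s_err

# Beverage TDS of the five brews
span, s, s_err = spread_stats([1.42, 1.41, 1.42, 1.43, 1.43])
print(f"span = {span:.2f}%, std = {s:.3f} ± {s_err:.3f}%")
```

Depending on the exact convention used for the ± term, the result can come out slightly different from the values quoted above, but the standard deviations themselves match.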
Today I decided to release publicly one of the V60 videos from my Patreon. I plan to make a better quality video eventually for my blog, but in the meantime I thought this would be interesting to a wider audience. Please view this recent post I made about what is going on with Patreon if you are worried that I’m making some of my content access-restricted, and this previous blog post explains the method I use here in more detail.
You can find a higher-resolution version of this video here, but be warned that it is 1.2 GB !
In this video I’m brewing Gardelli’s Ethiopian natural Chiriku with a 1:17 ratio and 22 grams dose. I used grind setting 6.8, slightly finer than my usual 7.0@700RPM, because last time I brewed this coffee, it felt a bit watery. Turns out I preferred it at 6.8, and I rarely get astringency at that grind setting. I’ll tell you more about that in a different post, but I now suspect that the average mass of coffee particles is an important factor that determines channeling, because it has a lot to do with the structural integrity of the coffee bed. I therefore suspect that there is a lower limit in grind size that will get you some astringency very easily for a fixed brew technique; it corresponds to the point where channels are being dug by water. Most of the time I still use a 7.0 grind setting, just to be sure.
You will notice in this video that I spin a bit harder than I used to. This is partly because I was being too gentle, especially in the first video, but it is also because I realized I have so few fines with the EG-1 grinder + SSP ultra low-fines burrs that fines migration is not as much of an issue compared to other grinders. I suggest starting very gently, and then trying more brews (with the same coffee) where you gradually spin a bit harder. If your brew time goes up significantly, then you might want to go a little easier to avoid fines migrating to the bottom of the brewer. You’ll have to find your own pace, as I suspect it depends (slightly) on the grinder you’re using; in general, a higher quality grinder should allow you to spin a bit harder.
I had pre-rinsed my Hario tabless filter before starting the video, first with a lot of tap water and then with a bit of warm brew water. The kettle was also preheated to 189°F before I started the video. My pours will all be made with boiling water, but preheating to 189°F means I’ll have to wait less when I’m ready to pour. The first thing I do is weigh and grind my beans – I weighed 22.3 grams and ground 1-2 beans to make sure nothing was stuck on the grinder burrs from yesterday’s brew. I then cleaned up my blind shaker and placed it back on for the main grind.
You’ll notice that during the bloom pour, I don’t concentrate too much on my pour technique: I move horizontally a bit too fast and I also move vertically, which I ideally shouldn’t. Instead, I make sure I have a high flow rate and stop at the right amount. My goal here is to wet everything quickly rather than immediately getting a perfectly level bed. I’m also giving it a much more thorough spin after that pour because I found that helps with getting everything wet at once.
You can see that I spent a short amount of time removing the high & dry grounds with my pours, but otherwise I described a very slow flower pattern that hits the center more often than the sides. I’m trying to get the whole bed agitated by doing that, with more focus on the center because there are more layers of coffee there. I move very slowly because I want the water to fall very straight (this helps in getting a flat coffee bed), and I don’t move vertically. I try to get a very steady flow too, but that’s the part I’m still the worst at without the ability to measure it on-the-go.
When I spin after the first pour to 200 grams, you can see that two bubbles appeared. That is generally not a good sign, as it means some brew water just touched dry coffee. The fact that it happened while I was spinning tells me that I probably just destroyed a channel and forced the water to flow through dry coffee. That doesn’t mean the brew will necessarily be bad, but it means I could have done a better job during the bloom phase. It happened to me 5 times in my last 14 brews, so about a third of the time. This is one reason why I’d really like to have a plastic V60 brewer with a steep & release mechanism (I know about the Clever but I don’t like its shape); it would allow me to stop water from flowing during the bloom, and probably give me enough time & control that I would be comfortable with mixing the bloom with a small spoon.
If you wonder why I tap the cork lid before putting it on the V60, it’s not from an obsessive compulsive disorder, but rather to make sure that there’s no coffee grounds on it (or at least I like to tell myself that).
Notice how clear the water is at the end of the drawdown. This is because the EG-1 with SSP burrs produces a crazy small amount of fines at the optimal V60 grind size. It reminds me of when I experimented with the Melodrip, but now I get similar clarity even after having agitated the coffee bed. If you pay attention at the end, you’ll see that I stop the scale’s timer exactly when the reflection of light from the water above the coffee bed ceases, because the water just went below the height of the coffee bed. I like to use this cue because it’s very repeatable, and it might help you compare your own brew times with mine more precisely.
At the end of the brew, I let the V60 drip a bit more into the beverage, then I place the V60 on top of a small glass and gently move it up and down to get a few more drips and measure the approximate TDS of the slurry at the end of the brew.
After that, I clean up the refractometer and measure the beverage TDS, but I make sure to taste it before looking at the TDS measurement, otherwise I found that it can affect my taste perception. Also notice how I mix the brew with a spoon before sampling it; this is better at mixing up all coffee layers than just spinning the brew.
I know the ending is a bit abrupt, sorry about that – my iPhone ran out of storage ! You just missed the brew TDS measurement. I’m starting to be more satisfied with this angle of view, so I’ll start thinking about how I can make a more complete brew video that I can eventually publish on my blog. I’ll make sure I don’t wear slippers for that one.
I realize I haven’t talked a lot about my Patreon page on this blog yet, so I thought I’d update you all about it in a short blog post. I might remove it later if it becomes irrelevant, because I am trying to make this blog a repository of useful resources rather than updates on my whereabouts (for that, you can see my Instagram).
The reason I created a Patreon is to buy some expensive equipment that will allow me to push my coffee posts further, but rest assured I have no intention for these posts to remain only accessible on Patreon. You can find more about these future plans in one of the public posts I wrote on Patreon, “Some Future Projects I Have in Mind“. There are a few more posts directly on my Patreon that are public and won’t make it to this blog, because they are not directly relevant to it, or I don’t feel this blog is the best place for their content.
I don’t want anyone to feel forced to contribute to my Patreon; rather, I’d like it to be only for the more “hardcore” fans who really want to contribute regardless. I do offer some benefits to my backers following the Patreon model with tiered donations, but these benefits are either not refined enough to be on my blog yet, or they are things that I never planned to share publicly on this blog. For example, I share multiple Patreon-only videos that are “in development”, either because the quality is not there yet, or because the content is not final. Stay tuned for such a video to be released here later today.
An example of something I do not plan to share publicly is my (almost) live-updated personal coffee log (although I will share some stats about it), and my running list of experiences with different roasters. I do eventually share publicly the roasters that I prefer, but I don’t share publicly those that I didn’t like – I feel like this would be a bit too hostile. So in conclusion, I view my Patreon as “backstage” access to the stuff that is in development, rather than anything that should replace the blog posts that I will keep making public here.
I’m hoping this will address some fears I read about online, and stay tuned for a V60 brew video later today !
[Edit May 28, 2019: The violin plot and two of the other plots below had an axis that stated micron squared and should have been millimeters squared; those are now fixed. Thanks to Mark Burness for noticing !].
As some of you know, I recently decided to make the move and get the Lyn Weber EG-1 grinder with the SSP Ultra low fines burrs. I took this decision mainly because I heard this combination generates the lowest amount of fines other than industrial roller mill grinders, but also because of its design focused on single dosing and low grind retention. I like to switch coffee every brew, so those are very nice features for me. I’ll make a more detailed post where I compare the EG-1 with my previous Baratza Forté, but in the meantime I’d like to talk about burr seasoning.
If you have never heard the term seasoning, it refers to the practice of grinding a large quantity of roasted coffee (or even rice) to break in grinder burrs, which initially have harsh angles and corners. It is often said that this is done to prevent grind size from changing with use, and to obtain a more uniform grind distribution, which maximizes the average extraction yield of good-tasting espresso or pour over brews. When I seasoned my Baratza Forté, I did it with 12 pounds of roasted coffee at espresso grind size. Back then, I didn’t have a good way to measure the particle size distribution of my grinder, and I just supposed that I was done.
It is possible to diagnose whether you are done seasoning your grinder with a refractometer, by actually brewing coffee and noting the maximum average extraction yield you are able to reach without getting an astringent taste. This typically takes me a couple of brews, and given the limited time I had to do this, I just ground a large amount of coffee and called it a day.
Now that I have written an application to actually measure grind size distributions, I decided to take a sample of coffee every few pounds while I was seasoning the SSP burrs of my EG-1 grinder. I zeroed the grinder position at grind setting 0.0, which means that this is the point where the burrs touched (I can hear the burrs start to rub against each other at grind setting 1.5). I initially seasoned 2 pounds with a 700 RPM (revolutions per minute) motor speed at grind setting 8.5, which means that the burrs were 425 microns apart; it turns out this was closer to a V60 grind size, so I went down to grind setting 5.0 (250 micron burr spacing) and 800 RPM after that. The slightly higher motor speed made sure that the motor didn’t stall from time to time as I fed a lot of coffee into the grinder. After 12 pounds, I even went a bit faster (1000 RPM) for the same reason. I stuck with this setting all the way to 24 pounds; I went that far because I heard the SSP burrs were particularly hard to break in.
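The grind settings quoted in this post map linearly onto burr spacing: the pairs given (8.5 → 425 microns, 5.0 → 250 microns) are consistent with 50 microns per setting unit. Here is a trivial helper, assuming this linear relation holds across the dial:

```python
def burr_spacing_um(setting, microns_per_unit=50.0):
    """Approximate EG-1 burr spacing in microns, assuming the dial was
    zeroed at the point where the burrs touch."""
    return setting * microns_per_unit

for s in (5.0, 7.0, 7.5, 8.5):
    print(f"setting {s} -> {burr_spacing_um(s):.0f} microns")
```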
I used a collection of beans from bad roast batches at my local roaster to do the seasoning, so they consisted of a mix of roast profiles and bean varietals. However, after every 2 pounds of seasoning with the mixed coffee, I always took a small ~10 gram sample of the same bean, a washed Bourbon from Burundi (roasted by my friend Andy Kires at the Canadian Roasting Society), which came from a single roast batch. I always made sure to purge the grinder of any grounds from the seasoning before collecting the sample, and I ground and threw away a small amount of the Burundi just before grinding the actual sample to make sure none of the seasoning coffee was left in. I always collected the Burundi samples at grind setting 8.5 (425 micron burr spacing) with a 700 RPM motor speed.
The Particle Size Distributions
I decided to measure the particle size distribution of half the samples (every 4 pounds), because this takes a crazy amount of work; for each sample, I took 12 images that I analyzed and combined with my grind size application. I didn’t count exactly how many hours this took, but it was about 2 seasons of The Office.
In this figure, the thickness of the horizontal band represents the total mass of the particles at each particle surface (this is called a violin plot and it’s great to compare several distributions together). This allowed me to see for the first time how the particle distribution moves to coarser particles as the burrs are breaking in. It makes a lot of sense that the distribution moves to coarser sizes, as the more rounded edges of the burr’s teeth should allow slightly coarser particles to pass through.
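If you want to build similar figures for your own grinder, matplotlib’s violinplot does most of the work. The sketch below uses synthetic log-normal particle surfaces (not my measured data) just to mimic a distribution that drifts coarser with seasoning weight:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic particle surfaces (mm^2) at a few seasoning stages; the
# coarsening trend here is illustrative only
stages = [0, 8, 16, 24]  # pounds of coffee through the burrs
samples = [rng.lognormal(mean=np.log(0.15 + 0.0015 * w), sigma=0.5, size=15000)
           for w in stages]

fig, ax = plt.subplots()
ax.violinplot(samples, positions=stages, widths=6.0, showmedians=True)
ax.set_xlabel("Seasoning weight (pounds)")
ax.set_ylabel("Particle surface (mm$^2$)")
fig.savefig("seasoning_violins.png")
```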
In the figure above, I show how the average particle surface changed with the total seasoning weight. The error bars are based on small number statistics (for the statistics geeks, they are based on Poisson distributions), and represent the fundamental limit in precisely measuring the average particle surface from the limited number of particles that I analyzed (typically approximately 15,000 particles, which is what 12 photos on a standard white sheet of paper gets you).
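To give a rough sense of that limit: the standard error of the mean shrinks as 1/√N with the number of particles analyzed. This simple estimate is not the exact Poisson-based calculation, but it gives the right order of magnitude:

```python
from math import sqrt

def mean_surface_error(std_mm2, n_particles):
    """Standard error of the mean particle surface from a finite sample."""
    return std_mm2 / sqrt(n_particles)

# Hypothetical numbers: a 0.1 mm^2 spread measured from ~15,000 particles
print(f"{mean_surface_error(0.1, 15000):.5f} mm^2")
```

In other words, quadrupling the number of photos analyzed only halves the error bar, which is why these measurements take so long.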
Notice how the 16 pounds data point seems off from the general trend. I strongly suspect this was caused by my forgetting to set the motor speed to 700 RPM when taking the Burundi sample – leaving the motor speed at the 1000 RPM I used for seasoning would make the particle size distribution finer on average. This is an effect I also observed with my app, but that will be for another blog post.
One thing that I found particularly interesting is the fact that, even when the particle distribution stabilizes and stops moving to coarser particle sizes, it kept becoming more uniform. I can’t say for sure that this happens on all burrs and all grinders, but this is a good thing ! I found it amusing that I stopped seasoning within 4 pounds of where the shifting of the particle size distribution stopped being detectable with high statistical confidence with my 12 photos. One thing that hit me when I saw this figure is that it resembles a relation that exponentially approaches an asymptote, like a lot of other things in life; another example of such a relation is the concentration of water versus time in an immersion brew.
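Such a relation can be fit with a three-parameter exponential approach to an asymptote. Here is a sketch using scipy on made-up data (not my measured values), recovering the asymptote and the e-folding seasoning weight:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(w, a, b, tau):
    """Asymptote a, approached exponentially with e-folding scale tau."""
    return a - b * np.exp(-w / tau)

# Made-up (seasoning weight in pounds, average particle surface in mm^2) data
w = np.array([0, 4, 8, 12, 16, 20, 24], dtype=float)
y = saturating(w, 0.20, 0.05, 6.0) \
    + 0.001 * np.random.default_rng(1).standard_normal(w.size)

popt, pcov = curve_fit(saturating, w, y, p0=(0.2, 0.05, 5.0))
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
print("a, b, tau =", popt.round(3))
```

The same functional form describes the immersion-brew concentration curve mentioned above, just with time in place of seasoning weight.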
Grinder Quality Factors
Another interesting relation to look at is how the width of the particle size distribution evolves with seasoning weight. I did this by looking at its standard deviation:
In the figure above, the error bars are similarly based on small number statistics. What we see here is a little different; the distribution initially becomes wider, but then it starts becoming narrower (more uniform). In my experience, particle size distributions that are centered on coarser particle sizes always seem to be wider. This is what led me to define something called the Q-factor (for “quality” factor) in my grind size app, which is simply the average particle surface divided by the standard deviation of the particle surface distribution. This ratio seems to be relatively constant across grind sizes (at least in the neighborhood of filter brews), and it also seems to go up with grinder quality. I’ll get back to this in more detail in a future blog post, but here are typical Q-factors that I started compiling for different grinders:
A friend’s Mahlkonig EK43* after aligning with shims: 1.45 ± 0.02
Baratza Forté BG*: 1.53 ± 0.01
My EG-1 with SSP burrs before seasoning: 1.59 ± 0.02
An older EK43* model that another friend carefully aligned: 1.61 ± 0.01
My EG-1 with SSP burrs after seasoning: 1.76 ± 0.02
An asterisk indicates a grinder with its original stock burrs. [Update May 17 2019: Stay tuned for a more complete list of Q-factors that will evolve over time and be accessible to Honey Geisha-tier patrons.]
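Computing a Q-factor from a list of measured particle surfaces is straightforward. The sketch below uses synthetic log-normal distributions (illustrative only; my app measures real surfaces from photos), and shows that the narrower distribution gets the higher Q-factor:

```python
import numpy as np

def q_factor(surfaces_mm2):
    """Average particle surface divided by the standard deviation of the
    surface distribution; higher means a more uniform grind."""
    surfaces = np.asarray(surfaces_mm2, dtype=float)
    return surfaces.mean() / surfaces.std(ddof=1)

# Two synthetic log-normal surface distributions with the same median
rng = np.random.default_rng(2)
wide = rng.lognormal(mean=np.log(0.2), sigma=0.6, size=15000)
narrow = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=15000)
print(round(q_factor(wide), 2), round(q_factor(narrow), 2))
```

For a log-normal distribution, the Q-factor depends only on the width parameter σ (it equals 1/√(exp(σ²)−1)), which works out to roughly 1.5 and 1.9 for these two widths – conveniently in the same ballpark as the real grinders listed above.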
Gathering these data takes a tremendous amount of work, but I’m gradually building up a library of quality factors for different grinders that I managed to get my hands on. My Patreon followers can already access that partial list as I build it up, but I will eventually release it to the public; it will take a while for me to finish this up however.
This led me to think that a more interesting way to look at how my particle distributions evolve through seasoning is to look at their Q-factor versus seasoning weight:
As you can see, the Q-factor didn’t change much at first while the particle distribution shifted to coarser grind sizes (it hovered around ~1.55, similar to a re-aligned EK43), but then it started increasing by quite a lot.
Eventually, I will map out precise particle size distributions for several different grind sizes with my fully seasoned EG-1. This will allow me to compare each of the particle size distribution above with a fully seasoned distribution at the same average grind size, and thus to say more precisely how the distribution narrowed versus seasoning weight, without having to make the assumption that the Q-factor is perfectly independent of grind size. But this will also take many more seasons of The Office 🙂
More Seasoning and an Interesting Observation
When I left my friend’s roastery after having seasoned the EG-1 with 24 pounds of coffee, I grabbed a bit more of his Burundi and put it in a sealed opaque bag with a 1-way valve and an oxygen absorbing pad (see my other blog post about keeping your coffee fresh for why this is good practice). I did this with the plan to eventually season the grinder a bit more with a 4-pound bag of bad coffee I had at home. It took me 23 days (and 3.5 pounds of filter pour over coffee that I actually drank) before I had the time to do so. Fortunately, I had the good idea to take a sample before this additional seasoning, as well as after. The effect of this additional seasoning on the particle size distribution was very small, as expected:
There is one thing that really surprised me however; if I compared the grind size distribution right before seasoning again to that right after my first seasoning, it actually became much finer and slightly wider, as you can see in this next figure !
It is highly unlikely that this was caused by the additional 3.5 pounds of coffee that I ground at filter size, because (1) grinding this coarse has a much smaller effect on breaking in the burrs; and (2) this goes exactly the other way than what seasoning does (as we saw above, it makes the particle distribution coarser and narrower, not finer and wider !).
My best hypothesis for what happened here is this: I think that the coffee beans de-gassed and dried as they aged, and the cellulose structure of the beans may also have weakened from the aging. All of these effects will make it easier for the beans to shatter, which will produce more fines, therefore shifting the particle distribution to finer average sizes and widening it. This is exactly what happens with decaffeinated coffee, which requires grinding at slightly larger grind sizes than regular coffee. This will be the subject of a different post, but some extensive blind-tasting dialing in had me select an optimal grind size of 7.5 (375 micron burr spacing) for Heart‘s Colombian decaffeinated coffee, whereas I selected 7.0 (350 micron burr spacing) for several different caffeinated beans; you can also see this nice coffee tip of the day from Scott Rao about brewing decaffeinated coffee.
I found this possible explanation so interesting that I plan to do more experimentation on it, to determine exactly how particle size distributions shift with aging. Imagine knowing exactly how much coarser your optimal grind will need to be as your coffee ages, without needing to dial in again. I would definitely love that !
Disclaimer: Doug Weber generously offered me the SSP Ultra low fines burrs when I bought the EG-1 (under no obligations). I decided to get this grinder based on my friend Mitch’s recommendation and my own research on available grinders, and I receive no benefits from Lyn Weber.