Chemicals, Controversy, and the Precautionary Principle
If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
I- The Precautionary Principle
Wikipedia’s article on the precautionary principle opens by describing it as:
...a broad epistemological, philosophical and legal approach to innovations with potential for causing harm when extensive scientific knowledge on the matter is lacking. It emphasizes caution, pausing and review before leaping into new innovations that may prove disastrous.
On its face this sounds like an ideal approach to new technologies and other forms of progress. As I have continually said in this space we’ve decided to do a lot of things which haven’t been done before. And these endeavors carry with them the potential for significant risk.
There’s a related metaphor from Nick Bostrom I’ve used a couple of times in this space, that technological progress is like a game of blindly drawing balls from a bag. Each new technology is a different ball, some are white and represent technology which is obviously beneficial, and some end up being dark grey—technology which has the potential for great harm. If we ever draw a pure black technology then the harm is so great it ends the game, and humanity has lost. With this metaphor in mind it would seem only prudent to pause before we draw these balls, and, once drawn, to exercise caution while we’re figuring out what color the ball is.
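Bostrom's metaphor is easy to turn into a toy simulation. The sketch below is purely illustrative: the probabilities, the function name, and the three-color scheme are all my own made-up assumptions, not anything from Bostrom's paper.

```python
import random

def draw_from_urn(p_grey=0.05, p_black=0.001, max_draws=10_000, seed=None):
    """Simulate Bostrom's urn: draw technologies until a black ball ends the game.

    Returns (draws_survived, greys_drawn, hit_black). The probabilities here
    are invented for illustration; nobody knows the real ones, which is
    rather the point.
    """
    rng = random.Random(seed)
    greys = 0
    for draw in range(1, max_draws + 1):
        r = rng.random()
        if r < p_black:               # a civilization-ending technology
            return draw, greys, True
        elif r < p_black + p_grey:    # a seriously harmful technology
            greys += 1
    return max_draws, greys, False

draws, greys, dead = draw_from_urn(seed=42)
print(f"survived {draws} draws, {greys} grey balls, game over: {dead}")
```

Even in this crude form the model captures the asymmetry that makes the precautionary principle attractive: grey balls accumulate as survivable damage, but a single black ball ends the run no matter how many white balls preceded it.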
Certainly Nassim Nicholas Taleb, who has also appeared a lot in this space, is a big fan of the precautionary principle. Among other places, he has referenced it in his fight against genetically modified crops, with his primary concern being the fragility introduced by monocultures. His definition is even more extreme than Wikipedia’s:
The precautionary principle (PP) states that if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety.
“Scientific near-certainty” is a pretty high bar. Would that be 90% certain? 95%? 99%? That seems like it would be pretty onerous. Though to be fair, he couples this extreme requirement for certainty with a presumed scale of harm in a way the Wikipedia definition doesn’t: he speaks of general health and the environment globally. But what are we to make of the phrase “suspected risk”? Certainly there has to be some threshold there; probably most innovations are suspected of being risky by someone somewhere. So I’m not sure that’s very limiting, and if it is limiting I’m not sure it should be. How many people suspected that social media would be dangerous? Lots of people suspect it now, but who looked at “TheFacebook” when it was still only accepting college students and said, “This site will eventually swing presidential elections and result in the worst polarization since the Civil War”? My guess is nobody.
Beyond the questions I’ve brought up, there are more significant objections to the precautionary principle. The Wikipedia intro goes on to say:
Critics argue that it is vague, self-cancelling, unscientific and an obstacle to progress.
The idea of it impeding progress is especially relevant because I have also talked extensively in this space, particularly recently, about the smothering effects of regulation on things like nuclear power. I also had a whole post on how the safety knob has been turned to 11, where I discussed how vaccines were being taken out of circulation out of an “abundance of caution”, caution that, on net, was almost certainly killing more people than it was saving. But Taleb’s definition of the precautionary principle would appear to recommend the same caution I was decrying: that before doing anything potentially risky we should have “near-certainty about its safety”.
(This is not to imply that Taleb was one of those who advocated for vaccine suspension, or felt the vaccines were released prematurely. I don’t think he did either, but I haven’t looked into it deeply.)
If you don’t like the vaccine example, I’ve also spent a lot of time talking about the regulations slowing down adoption of carbon-free nuclear power. But if you asked someone for the reason behind those regulations they might also reference the precautionary principle. So am I just a hypocrite, in favor of the precautionary principle when it’s applied to things I don’t like and not in favor of it when it slows down the things I do like? Or is there some way to thread this needle? What methodology can we use, what standard can we apply, to know when to be careful and when to be bold?
II- The Evidence
As I mentioned in my book review post at the beginning of the month I recently finished Count Down: How Our Modern World Is Threatening Sperm Counts, Altering Male and Female Reproductive Development, and Imperiling the Future of the Human Race by Shanna H. Swan, which made the case that we are suffering from a crisis of chemically induced infertility. At the same time I became very engrossed in a series of posts over at Slime Mold Time Mold (SMTM), which made essentially the same case, except with respect to obesity rather than infertility.
I understand that there is a type of person who spends a lot of time being worried about “Toxins!” And in many cases this worry comes across less as a specific complaint against a particular chemical, backed by science, and more as a generalized, inchoate condemnation of modernity. But when you have two groups independently making claims about the negative effects of increased levels of specific chemicals in the environment, with evidence tied to those chemicals, that seems like something else. Something that deserves a closer look. The question is: what part deserves a closer look?
Most people want a closer look at the evidence. From my perspective there seems to be a lot of it. SMTM has come up with several candidate “chemicals”: livestock antibiotics, Per- and polyfluoroalkyl substances (PFAS), lithium, and glyphosate so far. The series is still ongoing and there is at least one more candidate to come, perhaps more. Swan’s list is somewhat less structured, but it includes, at a minimum, phthalates, BPA, flame retardants, and pesticides. In particular she’s looking for anything that might disrupt endocrine function. Having identified the culprits, your next step would be taking a closer look at the evidence connecting them to the supposed harm.
Starting with SMTM, they have individual posts dedicated to each of their candidates. In these posts they do a great job of walking through what evidence there is and pointing out where they wish there was better evidence. And even pointing out when they think a particular chemical is unlikely to be associated with the obesity epidemic, as was the case with glyphosate. For what it might look like when they believe there is a connection, let’s take lithium as an example. They would love to be able to tell you how much lithium is in the groundwater, and how much lithium we’re exposed to, but neither thing has been tracked. They can however point to a huge increase in lithium production, going from essentially zero in 1950 to 25,000 metric tons in 2007 (when the graph ends). They can also provide data showing that people who take lithium therapeutically nearly always gain weight, with about 70% gaining significant weight. Finally, they point out that Chile and Argentina, the two most obese countries in South America (each with an obesity rate of 28%), are also two of the biggest exporters of lithium in the world.
In Count Down the evidence is a bit more scattered, and Swan is not as good at pointing out where she wishes there were more evidence, but there are numerous sections like the following:
Studies have shown that young men with higher levels of phthalate metabolites...have poorer sperm motility and morphology. This is bad news, since higher levels of phthalate metabolites also are associated with increased sperm apoptosis—a term for what is essentially cellular suicide. It’s safe to assume that no man wants to hear that his sperm are self-destructing.
Phthalates are bad news for women’s ovaries, too. High levels of phthalate exposure have been linked with anovulation (when ovaries don’t release an egg during a menstrual cycle) and polycystic ovary syndrome (PCOS), a hormonal disorder involving abnormal ovarian function and elevated levels of androgens.
The sort of things I just went through are where and how most people would take a closer look. Such an approach is designed as a way to increase certainty, in one direction or another, but for nearly everyone engaged in this approach it’s entirely academic. One person could look at the evidence and decide that it’s compelling, another could look at it and decide they still prefer the supernormal stimuli explanation for the obesity epidemic. But in both cases neither person is very likely to have the ability to change the entire course of capitalism and mitigate these harms at a national or global level. In fact, regardless of the conclusion someone reaches in their investigation, it could even be difficult to change these things at the personal level, given how ubiquitous the problems are.
I too “took” that same “look”. I found both the SMTM and the Count Down arguments to be compelling, but to move the debate from the academic to the practical we have to discuss what I would do if I were somehow made dictator of the world (truly a scary thought). Do I find the arguments compelling enough that in this position I would immediately ban all of these chemicals using my dictatorial powers? Probably not, and the reasons would presumably be obvious. Reading one book and one blog post series is definitely not enough information for me to truly understand the harms, and even if it were, I have no sense of the benefits provided by these chemicals. What kind of trade-offs would I be making if I banned these chemicals? In attempting to rectify the infertility and obesity problems, what other problems might I introduce? Beyond this there are issues of logistics, public opinion, potential backlash, and of course the general problems associated with exercising power in a dictatorial fashion.
Conversely, doing nothing doesn’t seem appropriate either; at a minimum, these issues would appear to deserve more study. But is that all we should do? Increase our data collection, so that in 10 years when SMTM does an update they can tell you how much lithium is in the groundwater? But otherwise report that nothing else has been done? That also seems insufficient.
There is a lot of space between data collection and a complete dictatorial ban, and somewhere in there is the ideal set of actions. This is the part I want to take a closer look at, not the evidence. The evidence is never going to be such that we can declare that these chemicals have no potential to cause harm, and we’re definitely not going to get to Taleb’s standard of “near-certainty”. In fact at this point I would argue that fighting over the evidence is a distraction. If the precautionary principle is to have any utility, this is a situation where it should be useful. But what useful guidance it might offer is not entirely clear. There is still the trade-off I mentioned at the beginning, between the problems we fear we will cause with technology and the problems we hope to solve with technology.
This is a difficult problem, and I’m just a lowly blogger. Also despite the fact that this is an “essay”, I’m still mostly thinking out loud (see my last post for a deeper discussion of what I mean.) But I’ve found that one of the best ways to think through a problem is to look at examples, so let’s try that.
III- Silent Spring
Silent Spring, by Rachel Carson, was published in 1962 and while it’s debatable whether it started the environmental movement, it definitely turbocharged it. For those who might somehow be unfamiliar with the book, its main focus was a claim that pesticides were causing widespread environmental damage. Carson took particular aim at DDT, which was largely used for mosquito abatement, an abatement that was very important because of the mosquito’s role in transmitting malaria. Her best known claim is that DDT thinned the shells of eggs, leaving birds unable to incubate those eggs, which in turn led to a massive decline in the population of these birds. As I recall she singled out bald eagles as a species that was especially endangered.
Viewed from the standpoint of the precautionary principle, Silent Spring could be seen as a notice, or perhaps it was just a strong reminder. We have never had any way of knowing in advance what the environmental effects of widespread chemical use would be. Nor is it unreasonable to default to the assumption that they would be harmful. These chemicals could decimate bird populations. They could cause obesity and infertility. They could cause a host of other things we’ve yet to detect. And they could cause none of those things. But again it’s impossible to know in advance, and it’s even difficult to know that now.
As I said Silent Spring put the world on notice. Before that perhaps we shouldn’t blame people for not being concerned about man made chemicals being dumped into the environment. But after it was published, such lack of concern is less excusable. Rather it seems more reasonable to assume, based on the attention it received, that some form of the precautionary principle should have kicked in. But what form should that have taken? Certainly now that we’re also seeing evidence that chemicals cause obesity and infertility, we imagine that it should have taken a fairly broad form. If nothing else it would be nice to have more data about these things than we currently have.
Beyond that, what should the invocation of the precautionary principle have entailed? We have a “when” for that invocation, and a sense that it should have been broader, but what else? It’s easy to say we should have banned DDT immediately as soon as Carson brought it to our attention, but, as mentioned, it was mostly being used to fight malaria. Malaria kills hundreds of thousands of people every year, mostly in Africa, mostly below the age of 5. Since large-scale use of DDT was restricted in 2004, at least 11 million people have died of malaria. I couldn’t find numbers going all the way back to 1962, but even a very conservative estimate of DDT’s impact on the spread and transmission of malaria gives us an impact of millions of lives. Despite this number I feel confident in saying that on balance restricting the use of DDT in 2004 was a good thing: mosquitoes were developing resistance, and at this point it’s hard to find anyone defending widespread use of DDT. Though to be clear, in 2004 the debate still raged. Back then even the New York Times was publishing articles titled, “What the World Needs Now is DDT”.
This brings up a legitimate question: would it have been possible to ban DDT any sooner? And when we consider the millions of deaths, would it have been wise to do it any sooner? If we agree that the 2004 ban was a good thing, would it have been a good thing in 1994 or 1984, or if we had banned it worldwide in 1974 shortly after it was banned in the U.S.? Given the number of malaria deaths I suspect not, but as you can see it’s a difficult question. Also we have thus far only been talking about malaria; what about other chemicals we’ve been pumping into the environment? We have a sense that we should have taken more precautions, but as we see from the example it’s still not entirely clear what those precautions should have been.
As something of an aside before we move on, looking into this topic not only involved a lot of research about malaria, but also the history of environmentalism, green parties, and antiwar activism. Some of which seems worth including.
As far as malaria goes, I thought this article from the Yale School of the Environment was a pretty good summation. It sets out to answer two questions:
[W]hat actually happened with DDT? And why is malaria, which seemed to be en route to eradication in the 1950s, still killing 584,000 people a year?
The answer to the latter question is the more interesting one, and it seems to boil down to “less-developed countries don’t have sufficiently non-corrupt governments which can successfully execute on public health initiatives.”
As to the rest of it, environmentalism and everything adjacent, I quickly realized that I was well outside even my pretended areas of expertise. As such I am indebted to my friend Stuart Parker and his podcast series, A History of North American Green Politics: An Insider View. I have mentioned him before in this space, but never by name. I didn’t want him to be tarred by association with me, on top of all the other tarring that he’s had to endure. But I really enjoyed that series; there is some great stuff in there. Also in this case I’m particularly indebted because my ignorance was so deep. Accordingly I wanted to at least make sure he gets credit. And to the extent I have any influence with you, I would recommend that you give it a listen.
I can’t really do it justice, but the history of environmentalism, like so many other things, is horribly complex, and it brought home to me again how complicated it is to get anything done. Everything you might want to do gets tied up in larger political narratives. (Environmentalism frequently succeeded or failed based on how it could be deployed as a weapon in the cold war.) On top of that people have a limited ability to focus, even if you’re working in an area they care about. Add to that infighting, tactics, personalities, and priorities and you can see it’s difficult to even get agreement as to what should be done. But if by some miracle you can get a broad agreement internally you still have to contend with external opposition. Environmentalism has always had a whole host of enemies, even if some of those enemies merely thought that the trade-offs went the other way.
Out of all of this we can see that in addition to the questions of “When?” and “What?” we need to add the question of “How?” We can decide it’s time to be cautious, we can decide what that caution should entail, but we still have to enact that caution in some concrete fashion.
This example seems to have given us more questions than answers. I don’t think the second example is going to be any better, but let’s proceed anyway.
IV- Gender Dysphoria and Same Sex Attraction
I debated making this section into its own post, so I could cordon it off, given how controversial the topic is. But if you’re going to examine an issue you really need to consider it from every angle and at every level of difficulty. I would say that the DDT example would be considered easy mode. We’ve known about it for a long time. We took steps. We can imagine that the steps we took should have been more extreme and sooner, but it’s also possible to argue that it went as well as it could have given the competing interests, the various tradeoffs in human lives and environmental damage, and of course the political reality.
Chemically induced infertility and obesity might be this subject at a medium level of difficulty. It’s only now entering mainstream awareness, even though it might have been going on for decades. (Swan makes the claim that chemically induced infertility is where global warming was 40 years ago.) Those who profit from these chemicals are deeply entrenched, and the public was long ago persuaded of other explanations for these phenomena, making them particularly difficult to win over. This means that there is a significant contingent already dedicated to defending the status quo, with only a very small contingent in favor of overturning it, or at least examining it. Furthermore the evidence you might use to change that imbalance is interesting, but certainly not ironclad. On the other hand the issue does have a few things going for it. For one thing it hasn’t yet become horribly partisan. Nearly everyone agrees that infertility and obesity are bad things. You could imagine that a narrowly crafted bill banning or restricting certain chemicals might even receive bipartisan support. Of course as battle lines are drawn things would certainly change, but that’s the case with everything at this point.
The idea that chemicals may be causing an increase in gender dysphoria and same sex attraction (SSA) is definitely hard mode. The subject is already a political and cultural minefield where reasonable discussion is impossible. And while I don’t think the evidence for this connection is any weaker than the connection between chemicals and infertility, it’s hard to imagine it not being scrutinized a hundred times more closely. And the biggest factor of all, those afflicted by infertility or obesity largely desire to be rid of the condition and consider it an affliction, while many who experience gender dysphoria and SSA consider it part of their identity, and violently reject any attempts to pathologize it. It’s hard to tell whether this contingent is the more numerous, but they are certainly the loudest.
Of course, the argument that some amount of SSA and gender dysphoria can be explained by environmental chemicals definitely counts as pathologizing the condition. Once again I think arguing about the evidence can end up being a distraction, because there’s no amount that is going to be convincing to all parties. And if we’re working on the basis of the precautionary principle, we’re really just looking for enough evidence to suspect risk, or (in the case of Taleb’s definition) rule out a “near-certainty” of safety. To that end I will spend some space laying out the case, but of course if you want to go deeper you should read the book:
In a 2019 article in Psychology Today, Robert Hedaya, MD, a clinical professor of psychiatry at the Georgetown University School of Medicine, wrote, “It is nothing short of astounding that after hundreds of thousands of years of human history, the fundamental facts of human gender are becoming blurry. There are many reasons for this, but one, which I have not seen discussed as a likely cause, is the influence of endocrine disrupting chemicals (EDCs).”
Many other clinicians and researchers are wondering about this, too. The question of whether chemicals in our midst are affecting gender identity is a bit like the metaphorical elephant in the room—obvious and significant but uncomfortable and difficult to address.
Swan goes on to list several mechanisms through which this might happen, and studies that show correlations between chemical exposure and gender development. She also has a section on rapid onset gender dysphoria, which covers much the same territory as Irreversible Damage. (Which I talked about in a previous post.) Also, I should mention that I put forth the theory that environmental chemicals might be causing the rise in gender dysphoria all the way back in 2018, as one of seven possibilities for the increase. So in some sense I was ahead of the curve.
As far as SSA, Swan spends less time on this, though she does make mention of the usual evidence from animals.
Meanwhile, some environmental contaminants have been found to alter the mating and reproductive behavior of certain species. We’ve seen alterations in courtship and pairing behavior in white ibises that were exposed to methylmercury, in Florida. One study found a significant increase in homosexuality in male ibises that were exposed to methylmercury, a result the researchers attribute to a demasculinizing pattern of estrogen and testosterone expression in the males; sexual behavior in birds (as in humans) is strongly influenced by circulating levels of steroid hormones including testosterone.
Again, the evidence is suggestive but inconclusive; to repeat my point, I’m not trying to reach a conclusion. What I want to know is what precautions do we take when there’s suspicion of harm and the evidence is incomplete? It’s difficult enough to act when the evidence is overwhelming (see the global warming issue, and also all previous discussions about nuclear power.) But what possible precautions can we take on an issue like gender dysphoria where the harms are hotly disputed, it’s right in the middle of a culture war, and the evidence is never going to be ironclad?
This post has gone on longer than I intended, so it might be worthwhile to briefly review what we’re trying to do here. One of the best ways to look at the situation is using the analogy offered by Nick Bostrom. We’re drawing balls from the bag of technology. Some are white and beneficial, some are gray and harmful. If we ever draw a black ball the game is over and we’ve lost.
As to the last point, I am not claiming that any of the things we’ve discussed represents a black ball. Rather, I think something else is going on, something Bostrom doesn’t consider in his original analogy, an addition of my own: some of the balls get darker after being drawn. Initially DDT’s effect on malaria transmitting mosquitoes seemed nothing short of miraculous. And plastics and other chemicals have been put to millions of uses in nearly everything. It’s only in the intervening years that DDT was shown to cause deep ecological harm, and plastics and other chemicals are now suspected to be causing infertility and obesity.
So, how are we supposed to handle the possibility that the “balls” of technology may change color? That something which initially seemed entirely beneficial will end up having profound, but unpredicted harms? Obviously this is a difficult topic, made more difficult by the fact that nearly any solution you can imagine would impact beneficial technologies at least as much as the harmful ones. That said, I think there are some principles that could be useful as we move forward. Clearly there is no simple solution which can be applied in all cases—something obvious and straightforward. We can’t suddenly stop introducing new technologies, nor can we unwind the last few decades of technology. (Which is what would be required to be certain of reversing the effects I’ve mentioned above.) Rather, each technology requires precautions carefully crafted to its specific nature.
The first and most obvious principle is that of trade-offs. None of the things we’re considering have zero benefits and neither do any of them have zero harms. Whether it’s chemicals or nuclear power or vaccines, everything has advantages and disadvantages. I have argued that the downsides of vaccines are vastly outweighed by their benefits, and I maintain a similar position when it comes to nuclear power, though the case is not quite so clear. When it comes to chemicals, the situation is even more complicated, but to have any chance of making a decision we need to know what sort of decision we’re making, and which benefits we’re foregoing in order to prevent which harms.
This takes us to the second principle. We need to have the data necessary to make these decisions. The SMTM guys would have had a much easier time making their case (or being refuted) if data collection had been better. As one example of many from their posts:
Glyphosate was patented in 1971 and first sold in 1974, but the FDA didn’t test for glyphosate in food until 2016, which seems pretty weird.
I am not an expert on which sorts of data are already being collected, who’s collecting them, what sort of costs are associated with the collection etc. But I have a hard time imagining that any reasonable level of data collection would be more expensive than trying to rip a harmful technology out of society after it’s spent decades putting down roots.
Of course this is yet another principle: Earlier is better. The sooner we can detect possible harms the easier and less complicated it is to deal with them. Lithium extraction has been going on for decades, but the oldest paper I could find linking it to obesity is from 2018. Presumably we might have been able to take more effective precautions if we had known about this link before lithium took on its critical role in the modern world, most notably in the form of lithium ion batteries.
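The value of early detection is easy to see with some back-of-the-envelope arithmetic. The sketch below is a toy model with entirely invented numbers: it assumes a technology quietly causes one unit of harm per year until the harm is detected, after which mitigation cuts the yearly harm to a tenth. The function name and every parameter are hypothetical, not anything from Swan or SMTM.

```python
def cumulative_harm(true_harm_per_year=1.0, detection_year=20, horizon=50,
                    mitigation_factor=0.1):
    """Toy model of a 'darkening ball': a technology causes steady harm,
    unnoticed until detection_year; after detection, mitigation reduces
    the yearly harm to a fraction of its former level.

    All numbers are illustrative assumptions, not empirical estimates.
    """
    harm = 0.0
    for year in range(horizon):
        if year < detection_year:
            harm += true_harm_per_year          # harm accrues undetected
        else:
            harm += true_harm_per_year * mitigation_factor  # post-detection
    return harm

# Detecting at year 10 instead of year 40 yields far less total harm
# over a 50-year horizon (roughly 14 units versus roughly 41).
early = cumulative_harm(detection_year=10)
late = cumulative_harm(detection_year=40)
```

The point of the sketch is that cumulative harm is roughly linear in the delay before detection, which is why even modest improvements in data collection, moving detection up by a few years, can pay for themselves many times over.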
It should be pointed out that the only way we can do all of these things is if we establish awareness of suspected harms in the first place. We’re unlikely to collect data on something when there’s no suspicion of risk. Or if the suspicion of risk has not risen to become part of the awareness of those empowered to collect data. That, more than anything else, is the point of this post, and of my blogging in general. Convincing people of some particular harm is secondary to making people aware of its potential for harm in the first place.
I am well aware that awareness can easily morph from familiarity into fear. To a degree that’s what I think happened with nuclear power. Preventing this from happening presents one of the greatest difficulties to the whole endeavor. One where I don’t think there’s a good answer. But I will offer up the somewhat counterintuitive opinion that the more potential harms we identify the better it will be. I think if people understand that nearly everything has the potential for harm, that this knowledge might help them not to overreact when some new harm is added to their already long list.
Thus far what we have mostly described is a process of observation, not of intervention. While one assumes that intervention will ultimately be necessary, our usual tactic for such interventions is to enact them at the highest level possible. International treaties, federal regulations, etc. This results in interventions which are both crude and ineffective, if not outright harmful. A great example of this would be environmental impact statements, which seem to be hated by just about everyone.
Here we arrive at what I consider the most important principle of all. The principle of scale. I’ve talked about scale before, and in a similar context, but in the limited space I have remaining I’d like to approach it from a different angle.
One of the things that jumped out to me as I was reading both Count Down and the SMTM stuff was how useful it was for their endeavors to have groups which provided natural experiments. Groups which had a greater than average exposure to the chemicals in question, or happened to have entirely avoided it either through chance, some system of belief, or a different regulatory system. It’s helpful to have lots of different people trying lots of different things.
This idea, depending on its context, can be labeled federalism, subsidiarity, or libertarianism. But in another sense it’s also a religious issue, and it’s not certain that the two don’t bleed together. People offer religious objections to vaccines; could they go the opposite way and assert that their religion demands that they use nuclear power? As another example, what if there was a religion which demanded that their food be free of certain chemicals? Considering the wide availability of kosher and halal food, this tactic seems worth pursuing. I understand that some people already do this with organic food, and to an extent there is an associated ideology. Is there any reason not to lean into this?
The point I’m trying to make is not that we should encourage religions to do such things but rather that we shouldn’t discourage them. If someone wants to try something, like intentionally infecting themselves with COVID as part of a human challenge trial, we should allow it, whatever they want to label it (and it’s possible the most effective label would be the religious one).
In this way we can do all the things I mentioned—assess trade-offs, gather data, raise awareness—at a scale that limits the harm. Of course this is not to say that there is no harm. I realize this opens the door to having even more people refuse to get vaccinated. I disagree with people who are opposed to getting vaccinated, and I understand how having such unvaccinated people endangers the rest of the population, and I realize this proposal might make it easier to refuse a vaccine. I also understand people who are opposed to nuclear power, despite my strong advocacy of it. They believe they will suffer the harmful effects of radiation despite not being part of the community that uses nuclear power, just as vaccinated people think they are more likely to get breakthrough COVID despite not being part of the anti-vax community. Unfortunately one of the few ways available to us to figure out whether a technology is dangerous or not is for some people to use it and for some people not to use it.
It would be nice if we could instantly discern whether a technology was going to be beneficial or harmful, on net, but we can’t. And I think our record of deciding such a thing in one fell swoop for all time and all people shows that we’re wrong at least as often as we’re right, and it wouldn’t surprise me if we’re actually wrong more often.
If you take nothing else from this very long post, it should be this. The precautionary principle is important, and as new technologies come along and as the harms of old technologies become more apparent we need to figure out some way of being more cautious—to neither blindly embrace nor impulsively reject technology. We need to be brave and careful. We need to gather data, but also act on hunches. The dangers are subtle and if we’re going to survive them we need cleverness equal to this subtlety. Put simply we need to look before we leap.
I’m not sure if this is my longest post, I’m too lazy to check. If it’s not it’s close. If you made it this far let me know. I’ll randomly select one of you for a $20 Amazon gift card. Let’s be honest you earned it. If alternatively you want to fund the gift card consider donating.