I learned a long time ago that no one is right all the time, but I have observed that experts have a better hit rate than the rest of us. Experts are supposed to be right between 80% and 90% of the time. To be clear, this is a heuristic (not science; it is MY rule of thumb) that I have developed by watching my most successful colleagues do their thing. More on this later.
Having set the stage, let me outline assets that I believe “experts” should bring to the table. They have:
- Significant relevant training. This is gained through a comprehensive course of study (no holes in their knowledge of the subject in which they claim expertise) that is recognized as among the best available on the subject of their expertise. We are not well served to be introduced to “Doctor” Smith only to find out that her/his doctorate is in medieval literature when the good doctor is advising us on climate change.
- Experience. Training is great, but I do not want a surgeon who has not done thousands of appendectomies taking out my appendix, and I did ask the surgeon who was going to take out my appendix how many times he had done it. He gave the right answer. The surgery proceeded.
- A well-defined worldview and/or process associated with delivering their expertise. An expert should be able to justify or frame the advice or actions they give or deliver. We should all be able to count on following a line of reasoning supporting an expert’s assertion that:
- This is one of the greatest pieces of art of the 20th century; or
- We can build a 58-story building in downtown San Francisco without worry that it will sink and lean over time.
In his book Structures: Or Why Things Don’t Fall Down, J.E. Gordon talks about a time when engineering expertise was limited to watching someone build a structure and waiting to see if it fell down to decide whether it was a good design or not. The big breakthrough in terms of expertise came when math and physics progressed to the point where an engineer had the ability to “do the math” (which obviously required a much deeper understanding of the engineering of a building) to determine whether the design was sound prior to building the structure. Both approaches to design are valid. One is rudimentary (i.e., knowing to look at examples of structures that stay up); the other is a lot better in terms of time, cost, flexibility, and, of course, the integrity of the structure.
- Recognition from other experts in their field and certification from the relevant authorities that confirm their status as an expert. There are places where we certify people’s expertise and don’t allow them to practice unless they have current certification. For instance, we don’t let just anyone fly a Boeing 777. We don’t even let certified pilots fly a 777 unless they are certified to fly that particular plane. We should be just as rigorous in scientific fields of study.
- Done the requisite work to be able to opine on the situation for which they have been engaged. This means understanding the particulars. Due diligence is part of an expert’s responsibility. If anyone should know that one size does not fit all, it is the expert. The expert should understand the influence subtle differences can make on how one approaches and deals with a situation. No phoning it in.
We should be way pickier about who we engage to give us advice and make sure that we use the preceding bullets as a checklist, not leaving even one bullet off, when vetting someone who claims to be an expert.
The scientific community has done us all a disservice by allowing people who are not “experts” to participate in the discussion as though they were. This is where I believe the scientific community has let us down. It should be carefully vetting experts against the bullet points outlined above and sidelining the frauds, and that is what they are, before they can do harm.
Further, the scientific community should not be arguing the veracity of established scientific principles unless the challenger has new science (that follows the scientific method) to challenge the established dogma. Hand-waving and smoke and mirrors should not be allowed!
That said, the scientific community has brought a lot of this on themselves by being sloppy and lazy:
- Scientists in one field of study have been allowed to participate in another without any kind of confirmation that they have the expertise and/or have actually done the work necessary to participate in the current conversation. Dr. Stella Immanuel, who is trained as a pediatrician, got herself into the middle of discussions about immunology and infectious disease. She was ridiculed in the press for her beliefs, but there was no process to declare her incompetent to contribute to the conversation about COVID-19 treatment. Her voice carried the same weight (in terms of confirming misinformation about treating COVID-19), maybe more, than the voices of scientists who are truly experts in the area.
- Scientists have selectively (and as far as I can tell arbitrarily) decided when more or less rigor is required when making arguments to support their point of view. This happened early in the pandemic when public health scientists flip-flopped on whether or not it was a good idea to wear masks. That was just plain sloppy.
This results in findings that are not reliable or consistent, i.e., they aren’t scientific. And, all too often, the scientific community does not hold the people doing the sloppy science accountable.
And, I would suggest that the consequences of letting people who lack the qualifications participate in high-stakes discussions (like those about the efficacy and safety of vaccines or the impact of human activities on the environment) are as great as letting a person who lacks a pilot’s license, much less a 777 certification, fly a 777.
And, we should:
- Require meaningful labeling that clearly identifies the experts and denies people who can’t “walk the walk” the legitimacy that the current, sorry state of affairs seems to afford them; and
- Make sure that if we decide to let people who aren’t certified experts “on stage” (which I am obviously against), they are clearly labeled as lacking the credentials necessary to be trusted.
I am not suggesting that people without expertise cannot make decisions for themselves, but the operative word is “themselves”. When scientists share their expertise: 1) they are affecting many people; and 2) there should be an expectation among critical thinkers who consume the results of their work that those results are based on good science and can therefore be used to make important decisions.
I started this post by noting that I think experts are different from the rest of us because they get things right (on average) 80% to 90% of the time. I said this because I wanted to remind myself (and you) that no one gets it right all the time. This is important because there are limits to expertise, and we need to appreciate what expertise can and cannot do. That said, the fact that experts are fallible (and this is the point) does not mean that someone without expertise provides the same value, in terms of directing us to good outcomes, as someone who has it.
I close this post by suggesting that the scientific community has a responsibility to bring more rigor to the process and application of the scientific method and the documentation of results and recommendations. This includes vetting practitioners to make sure that they have the requisite skills and experience to participate in the work and follow-up discussions.
You can find this (days earlier) and other posts at www.niden.com.
And, if you like this post: 1) please let me know; and 2) pass on your “find” to others.