Is “All Natural” Really “All Good”?

February 7, 2017 - 17 minutes read

The presumed safety of natural substances compared to synthetic (or artificial) food ingredients has created a rush to “all natural” and “clean labels” that has become a tsunami over the last decade. This clamor for naturals is partly to avoid the use of multi-syllabic chemical names on the label, but also, in a clever and indirect way, to assure consumers they are getting something safer than an artificial ingredient because it was presumably vetted by nature. Or, possibly, the equally fallacious claim that “we evolved” consuming the natural, “so how could it be harmful”? However, is this belief that natural is inherently safer than artificial actually true? Putting aside urban myths and unicorns for the moment, could the most realistic theory of “vetted by nature” include eliminating (i.e., killing off) the non-tolerant members of the species that consumed the natural, leaving only the hardiest of individuals? Is this food-based Darwinism?

I would tend to believe that from the viewpoint of a hunter-gatherer (with a paucity of dining choices), anything that did not appear repugnant, tasted good (i.e., not bitter with potentially dangerous alkaloids), sustained me (i.e., was nutritious) and was not immediately (acutely) toxic, was probably a good thing to eat – at least most of the time. Our hunter-gatherer ancestor would likely not eat green apples or three-day-old fish more than a couple of times, and he learned to limit his intake of certain items that could have toxic effects if consumed in high amounts (e.g., licorice), to process certain foods that would be toxic in their native state (e.g., cassava root) and to eat only rhubarb stems and not the leaves. Hunter-gatherers with disaccharide, gluten or lactose intolerance, favism or phenylketonuria were just out of luck, even though the basis of their intolerance was consumption of a natural. Nature regarded these hunter-gatherers with intolerances or metabolic anomalies as residing at the tail of the bell curve and dealt with them rather harshly ─ either they learned to be careful in their selection of foods or they did not live to pass on their genetic-based challenges.

In the late 19th and early 20th centuries, the idea of naturals as inherently safe was never much of a front-seat issue with most consumers, compared to the outrage that synthetic chemicals generated once they were characterized as poisonous. Especially influential in the concept of poisonous food additives were the pioneering efforts of Harvey Wiley, whose work on food additives at the turn of the 20th century eventually resulted in the Pure Food and Drug Act of 1906. Wiley fervently believed that all added non-natural substances posed a potential risk to public health and that none was wholesome (White, 1994). Wiley’s screeds during his government service, and later as chief of the laboratories of Good Housekeeping Magazine, were, however, misdirected to some extent: his condemnation of “chemicals” was aimed at toxic entities such as coal tar-derived artificial colors and preservatives (such as formaldehyde and borax), but these admittedly bad actors were conflated with all chemicals used in food, and all were characterized as harmful. During this period, the Wiley followers even wanted the regulations to refer to all chemicals added to foods as poisons. At this point in time, hysteria about artificial chemicals as poisons in food had peaked among the informed few, but had not yet become a widespread belief among consumers.

The Delaney Committee hearings on chemicals in food (including pesticides), in the early 1950s, brought a modicum of reasonableness to the bombastic statements by witnesses on both sides of the issue. Even the widow of Harvey Wiley was present at the hearings and testified that Congress should prevent the addition of chemicals to food, especially additives that made bread soft. Organic food champions such as J.I. Rodale (editor of Organic Gardening and Organic Farmer and later, Prevention Magazine) testified to the perils of pesticides and the hazards of additives. However, given all the hyperbole about chemical additives and colors, natural ingredients received scant attention and were generally regarded as benign.

Eventually, pesticides were teased out of the mix of chemicals considered by the Committee and addressed in the Pesticides Residues Amendments of 1954, but the difficult question regarding the 2,000+ food ingredients remained, not to be solved until four years later with the Food Additives Amendment of 1958. Congress determined that an ingredient added to food would either (1) go through an FDA approval process as a “food additive” or (2) be exempt from that process if generally recognized as safe (GRAS). One of two criteria had to be met for qualification as GRAS: (a) the substance had been in (safe) use prior to January 1, 1958, or (b) it had been shown to be safe (to the standard of a “reasonable certainty of no harm”) through conventional testing. However, because of the inevitable parsing of “what is safe” as an added ingredient versus that already present in food, Congress included a section in 21 U.S.C. 342 (in the Act as §402):

A food shall be deemed to be adulterated—(a)(1) If it bears or contains any poisonous or deleterious substance which may render it injurious to health; but in case the substance is not an added substance such food shall not be considered adulterated under this clause if the quantity of such substance in such food does not ordinarily render it injurious to health…

That is, a food is adulterated if a toxic substance is added; but if the toxic substance is not added, being present naturally, and does not ordinarily cause injury, the food is not adulterated (if the substance did ordinarily cause injury, as poisonous mushrooms might, it would probably not be called a food anyway). The bottom line – if you add a toxin, the food might be considered adulterated and cannot be sold, but if the toxin is naturally present, the food can be sold. The assumption, then, is that generally all natural, unprocessed “foods” are safe and that tolerances could be put in place for those natural foods or extracts from plants containing undesirable constituents – §402 makes other accommodations for even the most parsing of readers.

Therefore, §402 was an open invitation to nominate natural substances for regulation. The bulk of the naturals nominated consisted of approximately 1,100 natural flavoring substances (i.e., plants and plant parts) and their extracts, which were considered safe by FDA and included in the Code of Federal Regulations, either as GRAS substances or as food additives. Notably, most flavor ingredients were given food additive status instead of GRAS status because the general recognition requirement could not be met (Lin, 1991). According to a representative of FDA, these designations of GRAS or food additive were a temporary measure, and the FDA was very interested in subjecting these substances to a more thorough review at a later time (Lin, 1991). There is no indication this review by FDA has ever taken place.

Now then, in July 2011, (what is now named) the Office of Dietary Supplements promulgated draft guidelines requiring extensive toxicity testing of new dietary ingredients (NDIs), even though many of the NDIs were from the same plants whose genus and species were named in the Code of Federal Regulations following the 1958 amendment (21CFR172.510 and 21CFR182.10, 182.20, 182.40, et al.), and even from those substances reviewed by the Select Committee on GRAS Substances in the early 1980s (21CFR184 et seq.). Although the 2011 guidelines were eventually withdrawn, new draft guidelines were published in August 2016,[1] and the testing requirements were generally the same. Although there was some lip service to exemption from NDI notification for substances already in food, the qualifications for the exemption were extensive and sometimes conflicting. The question remains, however: why are these safety testing requirements so onerous when the same plants had been approved for food use more than 50 years before? The rationale for extensive testing requirements is revealed in an examination of the first years following the 1958 amendment and, to a lesser extent, in the NDI draft guidelines.

Immediately following passage of the 1958 Amendment, the flavor companies, who were especially vulnerable to the challenge of back-filling a yawning chasm of data gaps, initiated a survey to determine what flavors were being used ─ the list eventually settled upon for submission to FDA numbered 1,124 (although not all were naturals), but few had any test data to support their safety. Because the information supporting safe use was not equally rigorous for each ingredient, the path forward for a claim of safe use was much like a three-legged stool where one leg could support another: (1) a claim of history of use prior to 1958, although documentation supporting consistent historical use of the identical substance was sometimes weak; (2) the claim that exposure to many substances was relatively low, as most substances were confined to only a few foods each (e.g., no rosemary extract would be used in candy and no spearmint would be used in gravies); and (3) the claim that low exposure in itself (industry eventually settled upon an estimated 2 mg per day as the dietary exposure for most flavor ingredients) was de minimis[2] and thus did not require analysis for toxins.

The claim of a history of use of the substance appeared to be valid on its face, but as cited in the literature on the subject and in the NDI draft guidelines, the source of a natural substance is not always the same, nor has it been historically; that is, while some producers may obtain a natural from one species, others may use another species or even a different part of the plant that may have a lower yield but is a more economical source – the reason for the more recent demand in the NDI draft guidelines to cite not only genus and species, but variety as well. Also, the time of harvest (season) or even the time of day will produce a different combination of constituents, as will the geographic location of the plant (which reflects soil minerals, rainfall, etc.) and even the means of harvesting and storage (which may give rise to production of toxic constituents, as seen in potatoes, celery and other crops that are mishandled during harvest and storage). Additionally, the substance in question may come from a totally different source; for example, the greatest source of resveratrol is not grapes, as expected, but Japanese knotweed (Fallopia japonica) – this point also illustrates the fact that while the constituent may have had a history of use when taken from a specific plant, new sources of the constituent have been developed from plants with no history of consumption. A question that specifically comes up in the new draft guidelines is the fact that there have been significant manufacturing changes in the production of plant constituents, such as supercritical CO2 extraction, which in some plants likely yields a significantly different palette of extract. All of these variables embody the possibility of the plant containing toxins. Further, the commercial quality of a natural extract is assessed by its content of the desired constituent, not of other substances (including toxins).

The claim that exposure to consumers is low because most flavors are used only in certain types of foods, and not all foods, has become an outdated assumption. It is belied by the fact that nutmeg is used not just in confections and baked goods, but is a prime contributor to meats (e.g., ground meat products including hot dogs and meat balls), spinach dips and other dishes; cinnamon and orange have similarly widespread uses. Moreover, high-concentration uses not fully appreciated at the time this theory of low exposure was hypothesized include mouthwash, toothpaste and breath mints (not considered foods, but either drugs or cosmetics), tobacco and, now, non-flavor uses such as antioxidants in food (e.g., rosemary) and the large amounts of naturals and their extracts used as dietary supplement ingredients. Nor is the claim of self-limitation valid, as the extracts of some substances no longer contain the constituents producing the pungent flavor notes that prevent over-exposure.

As discussed, the argument for limited exposure is invalidated by a number of products that were not considered at the time many ingredients were incorporated into the CFR and, with this invalidation goes the argument that analysis for potential toxins need not be conducted, as the standard for de minimis has been lowered to 0.5 ppb in the diet (21CFR170.39 – the threshold of regulation).
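To put these figures side by side, here is a back-of-the-envelope sketch comparing the historical 2 mg/day de minimis estimate with the 0.5 ppb threshold of regulation. The ~3,000 g/day total diet figure is an assumption conventionally used in dietary exposure calculations, not a number stated in this article:

```python
# Rough comparison of the historical de minimis flavor exposure estimate
# (2 mg/day, per the flavor industry's post-1958 survey) with the current
# threshold of regulation (0.5 ppb in the diet, 21 CFR 170.39).
# ASSUMPTION: total diet of ~3,000 g of food and drink per person per day,
# a conventional figure for such estimates (not given in the article).

DAILY_DIET_G = 3000      # assumed grams of food + drink consumed per day
THRESHOLD_PPB = 0.5      # threshold of regulation, parts per billion by weight
HISTORICAL_MG = 2.0      # historical de minimis estimate, mg/day

# 0.5 ppb by weight means 0.5 nanograms of substance per gram of diet,
# so the threshold in micrograms per day is:
threshold_ug_per_day = DAILY_DIET_G * THRESHOLD_PPB / 1000  # ng -> µg

# How many times the historical estimate exceeds today's threshold:
ratio = (HISTORICAL_MG * 1000) / threshold_ug_per_day

print(f"Threshold of regulation ≈ {threshold_ug_per_day} µg/day")
print(f"Historical 2 mg/day estimate exceeds it ~{ratio:.0f}-fold")
```

Under this assumption, the 2 mg/day figure once treated as trivially small exceeds the 0.5 ppb threshold by roughly three orders of magnitude, which underscores why the de minimis defense no longer holds.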

Therefore, the rationale for testing and risk assessment of natural ingredients becomes obvious: such testing should be conducted according to the same standards that apply to artificial (synthetic) ingredients (including single-chemical fermentation products).




[2] From the law: “de minimis non curat lex” – “The law does not concern itself with trifles”.

  • Lin LJ (1991). Interpretation of GRAS criteria. Food Drug Cosmetic Law Journal 46:877-844.
  • White (Junod) SR (1994). Chemistry and controversy: regulating the use of chemicals in foods, 1883-1959. University Microfilms International, Ann Arbor MI. 416 pp.

