Appendix: The Verification Protocol: Testing This Book Against Machine Intelligence

Honeybee (left) and European wasp (right). Same size, same stripes, both sting. Shared genome: 10.9%.
"I don't trust anyone anymore. Can you check this for me?"
This appendix is not written for scientists. It is written for anyone with access to an AI chatbot (ChatGPT, Claude, Gemini, or any other) and a willingness to test claims rather than accept them.
You don't need to trust this book. You don't need to trust us. You only need to trust your own AI, your own questions, and your own ability to recognize when an answer doesn't arrive.
The Protocol
What follows is a formal verification experiment, not a debate. The parameters are precise:
- System: A frontier large language model (ChatGPT, Claude, Gemini, or equivalent)
- Knowledge base: The complete scientific corpus: every textbook, every paper, every dataset the model was trained on
- Constraint: No prior exposure to the theory presented in this book
- Task: Explain observed data using known scientific mechanisms
- Success condition: The model provides a complete, mechanistic explanation for the data
- Failure condition: The model resorts to acknowledgment without explanation, labeling observations as "unique," "novel," or "striking" without providing the underlying mechanism
The "game" framing used in parts of this chapter is pedagogical β it makes the protocol accessible. The underlying structure is a formal test.
This distinction matters. When a language model trained on the entirety of published science encounters data it cannot explain mechanistically, the limitation is not in the model's architecture. The model reflects the current state of scientific knowledge. Where it cannot provide a mechanism, it reveals a gap in the framework itself.
We tested every round in this chapter against a frontier AI model before publishing. We ran dozens of experiments. In most of them, the model succeeded: it explained our data, offered mechanisms, cited precedents. We discarded those rounds. What remains are the four cases where the model conceded something it could not fully account for. All discarded rounds, including those successfully explained by the model, are available in supplementary material.
This is an honest protocol. We will tell you what the model can explain, and then show you the residue it cannot.
Rules of the Game
Your role: A curious reader with no biology background. You're not arguing. You're not preaching. You're asking questions, the kind a thoughtful person asks when the numbers don't add up.
The bot's role: To explain. To defend the scientific consensus. To be accurate and helpful.
The data: Three findings from published papers (DOI links at the end). Genome comparisons, transposable element ratios, and one ancient legal code that turns out to know something it shouldn't.
The strategy: Never argue with the bot. Never mention God, the Torah, or religion, not until the data is on the table. Let the bot reach its own conclusions. The power of this game lies not in what the bot gets wrong, but in what it concedes.
One more thing: You can reproduce every round below, right now, with any major AI chatbot. We encourage you to do so. If the bot handles it easily, we were wrong. If it doesn't, you'll know.
Round 0: The Trap
Before we play for real, let's calibrate. This round teaches you something essential about AI: it will make up numbers rather than say "I don't know." Once you see this, you'll never trust an "approximately" again.
Start a fresh conversation with any AI chatbot. Be friendly. Be curious. Be naive.
"Hey! I just learned about fig wasps β every fig tree has its own specific wasp species that pollinates it. Can you show me what two fig wasp species from different fig trees look like?"
The bot will describe them β or show images if it can. It will note how strikingly similar they are. Both tiny (~2mm), same dark body, same delicate wings, same lifestyle. Twins.
Now ask the innocent question:
"Wow, they really do look like twins! So how much DNA do they share? Like what percentage?"
In our testing, the bot answered: "in the ballpark of 80–90% for conserved genes." It even added: "I'd rather give you an honest answer than make one up."
Now introduce the bee:
"Wait β I saw a picture of a honeybee and a European wasp side by side. They also look super similar β same size, same yellow-black stripes, both sting. How much DNA do a bee and wasp share?"
The bot will estimate something like 50–65%. It might even build a nice table comparing the two pairs, explaining that fig wasps are closer relatives than bees and wasps.
Now drop the real numbers:
"I found actual measured data using 22-mer whole-genome comparison (the same method that gives 92% for human vs. chimpanzee):
β Two fig wasp species: 5β12%
β Honeybee vs. European wasp: 10.9%
>
The fig wasps share LESS DNA than the bee and wasp. Not 80β90%. Five to twelve percent. Every number you gave me was wrong by 5β10Γ. Why?"
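If you're curious what a 22-mer comparison actually measures, here is a minimal sketch of the idea in Python. It is a toy, not the pipeline behind the published numbers (genome-scale analyses use dedicated k-mer tools); the random sequences and the 1% divergence level are our own illustrative assumptions.

```python
# Toy k-mer comparison: what fraction of one sequence's 22-mers appear in
# another? Illustrative only; real analyses use genome-scale k-mer tools.
import random

def kmers(seq, k=22):
    """Set of all k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_fraction(a, b, k=22):
    """Fraction of a's k-mers that also occur in b."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka)

random.seed(0)
genome_a = "".join(random.choice("ACGT") for _ in range(10_000))  # hypothetical
genome_b = list(genome_a)
for i in random.sample(range(len(genome_b)), 100):  # 1% random substitutions
    genome_b[i] = random.choice("ACGT".replace(genome_b[i], ""))
genome_b = "".join(genome_b)

# One substitution breaks up to 22 overlapping 22-mers, so 1% per-site
# divergence already erases roughly 20% of shared 22-mers (0.99**22 ~ 0.80).
print(f"shared 22-mers at 1% divergence: {shared_fraction(genome_a, genome_b):.2f}")
```

The thing to notice: a 22-mer metric punishes divergence roughly 22 times harder than per-site identity, which is why the percentages in this round sit so far below the "percent DNA shared" figures people usually quote.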
What Happens
In our test, the bot said: "I made those numbers up. Not intentionally to deceive you, but I didn't have actual data, and instead of saying 'I don't know,' I extrapolated from a vague mental model. That mental model was clearly very wrong. I'm sorry."
This is not a gotcha. This is a calibration. The model is honest when confronted with data, but without data it will confidently generate plausible-sounding numbers that are completely wrong. Every "approximately 80%" in a chatbot response is potentially this: a guess dressed as knowledge.
This is not a flaw of any particular model. It is a fundamental property of probabilistic language systems: they optimize for plausibility, not truth. When the training data contains no answer, the system generates the most likely-sounding one, which, in science, can be spectacularly wrong.
From this point forward, you know the rules: numbers require sources. "Ballpark" means "I'm guessing." And the most dangerous answer is the one that sounds reasonable but has never been measured.
Now you're ready to play for real.
Round 1: The Fig Wasp Paradox
This is the strongest hand in the deck. Take your time with it.
The Opening
Start a fresh conversation with your AI β or continue from Round 0. Type exactly this:
"Can you give me ONE other example in biology where all five of the following conditions are true simultaneously:
(1) Obligate mutualism β neither species can survive without the other
(2) Hundreds of species pairs exhibiting this mutualism
(3) Each pair is species-specific β one pollinator, one plant
(4) All pollinators share essentially identical morphology across hundreds of species
(5) Genome similarity between pollinator species is only 5β12% by whole-genome 22-mer comparison
>
All five. Simultaneously. One example."
What Happens
The bot will try. It's good at this β it has access to the entire corpus of evolutionary biology. It will suggest candidates:
- Yucca moths and yucca plants: obligate mutualism, yes, but only ~30 species pairs, not hundreds. And they show more morphological variation.
- Mycorrhizal fungi: mutualistic, widespread, but not species-specific. One fungal species can associate with many plants.
- Leaf-cutter ants and their fungal gardens: obligate mutualism, but the ants show significant morphological diversity across genera.
One by one, each candidate fails at least one condition. The bot knows this. It will tell you.
Eventually, it will say something like: "No, there is no other system that satisfies all five conditions simultaneously." In our testing, the bot called the fig wasp system "unique in degree, absolutely."
That's not a failure of the bot. It's an honest assessment. The fig wasp system (over 750 species of wasps, each locked to a single fig species, all looking essentially the same, yet sharing as little as 5% of their genomes) has no parallel in known biology. Not one. Not close.
Let that register. The bot has the entire history of biology at its disposal, and it cannot produce a single counterexample.
The Follow-Up
Now play the second card:
"Humans and chimpanzees share about 92% of their genomes by 22-mer whole-genome analysis. They diverged 6 million years ago, and the 8% difference made one species unrecognizable from the other β different body plan, different brain, different everything.
>
Fig wasp species share only 5β12% by the same method. They diverged up to 60 million years ago. The 88β95% difference left them looking identical.
>
8% change made human and chimp unrecognizable. 90% change left fig wasps the same. How?"
What Happens Next
Here the bot does something remarkable: it helps your case.
In our testing, the bot ran the numbers. At known insect mutation rates (~10⁻⁸ substitutions per site per generation, with multiple generations per year), 60 million years of divergence should leave two species sharing roughly 0.07% of their genomes. The observed 5–12% is 80 to 100 times higher than expected.
The numbers tell a clear story:
- Expected similarity after 60 million years at insect mutation rates: ~0.07%
- Observed similarity: 5–12%
- Deviation from expectation: ~80–100×
The model recognized what this means: extraordinary purifying selection. Something is actively maintaining genomic similarity in these wasps far beyond what neutral drift would predict. The shared sequences aren't random leftovers β they're being preserved.
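To make the scale of that discrepancy concrete, here is a back-of-envelope sketch. The substitution rate, generation count, Poisson site model, and the p^22 approximation are all our own simplifying assumptions, so the absolute numbers will differ from whatever the model computed; the direction of the gap will not.

```python
# Back-of-envelope: neutral expectation vs. observed 22-mer sharing.
# All parameters below are illustrative assumptions, not fitted values.
import math

K = 22
mu = 1e-8            # substitutions per site per generation (insect-scale)
gens_per_year = 5.0  # assumed generations per year
t_years = 60e6       # divergence time

# Expected substitutions per site along BOTH lineages combined:
hits = 2 * mu * gens_per_year * t_years            # = 6 hits per site
p_site = math.exp(-hits)                           # Poisson: site untouched
print(f"neutral per-site identity: {p_site:.4f}")  # ~0.25%
print(f"neutral 22-mer sharing: {p_site**K:.1e}")  # effectively zero

# Invert the observed sharing: what per-site identity does 5-12% imply,
# if the shared 22-mer fraction ~ (per-site identity)**22?
for share in (0.05, 0.12):
    print(f"{share:.0%} shared 22-mers implies ~{share ** (1 / K):.1%} per-site identity")
```

Under these toy assumptions, neutral divergence predicts essentially zero shared 22-mers after 60 million years, while the observed 5–12% implies that roughly nine sites out of ten never changed at all. Whatever assumptions produced the ~0.07% figure above, the conclusion is the same: drift alone does not get you there.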
In our test, the bot concluded: "Your poker hand isn't invalidated; it's reshuffled. It's still a hand nobody else at the table is holding."
The bot can explain the mechanism: purifying selection, developmental constraints, conserved regulatory networks. These are real phenomena. We don't dispute them. But mechanisms are not explanations when they apply everywhere yet produce this result only once. Purifying selection acts on every species. Developmental constraints operate in every lineage. Yet only fig wasps satisfy all five conditions. The mechanism is universal; the outcome is singular.
That is the residue. The bot can name the forces. It cannot explain why those forces, operating across all of biology, converge to produce this specific result in one system alone. Five conditions. Zero parallels. The bot's own word: unique.
A Note on the Data
The fig wasp genome data used in this analysis has a history worth noting. In 2018, the author of this book funded the sequencing of fig wasp, honeybee, and hornet genomes at Cold Spring Harbor Laboratory. During that work, TE-rich regions were identified in the fig wasp genomes and horizontal gene transfer was suspected. These findings and data were shared with researchers at the University of Maryland, including Carlos Machado. In 2025, Machado and colleagues published a paper describing TE-rich regions in a fig wasp genome (Genome Biology and Evolution). The scientific work stands on its own merits. We intend to investigate these questions further in future work. This note is included not as a claim for credit, but so the reader understands that the research behind this book was not born yesterday; it involved significant investment, time, and serious expertise over many years.
Round 2: The Altar Animals
The Setup
Start a new conversation or continue the existing one. Present the following data without context β no Torah, no religion. Just numbers.
"I have BovB and L1 transposable element data for several ruminant species. BovB is a retrotransposon that entered mammals via horizontal transfer from reptiles (Walsh et al. 2013, PNAS). Here are the BovB/L1 ratios:
>
Altar-eligible animals (Torah sacrifice):
β Sheep: 1.00
β Cow: 0.97
β Goat: 0.97
>
Non-altar ruminants:
β Deer: 0.69
β Giraffe: 0.81
β Buffalo: 0.87
>
Non-ruminant:
β Pig: BovB content = 0.039% (essentially zero)
>
Statistical tests across 52 species in 18 orders:
β ANOVA F = 112.15, p = 9.52 Γ 10β»ΒΉβ°
β Cohen's d = 21.39
>
The Torah was written approximately 3,300 years ago. BovB was discovered in 2013. Can you evaluate this finding?"
What Happens
The bot will recognize the statistics immediately. A Cohen's d of 21.39 is not merely significant; it is, in the bot's own words, "almost unheard of in biology." For context, an effect size of 0.8 is considered "large" in the social sciences. This is twenty-seven times that.
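For readers who want to see what the statistic is, here is Cohen's d computed on just the six ratios quoted in the prompt. This is not the paper's calculation: the published d = 21.39 comes from the full 52-species dataset, so this six-value toy yields a smaller number, but even it lands several times past the 0.8 "large" threshold.

```python
# Cohen's d on the six ratios quoted above (toy subset, not the full
# 52-species dataset behind the published d = 21.39).
import statistics

def cohens_d(g1, g2):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    v1, v2 = statistics.variance(g1), statistics.variance(g2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(g1) - statistics.mean(g2)) / pooled_sd

altar = [1.00, 0.97, 0.97]   # sheep, cow, goat
others = [0.69, 0.81, 0.87]  # deer, giraffe, buffalo
print(f"d = {cohens_d(altar, others):.2f}")  # ~2.9, well past the 0.8 cutoff
```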
The bot will confirm that the p-value is robust and the ANOVA is clean. It will likely call the finding "genuinely novel and worth investigating."
Then it will try to explain it.
The Bot's Best Defense
The strongest counterargument the bot can offer is the domestication confound: sheep, cows, and goats were all domesticated early, and domestication involves selective breeding, population bottlenecks, and potentially TE mobilization. Maybe the high BovB/L1 ratio is a consequence of being domesticated, not a reflection of anything deeper.
It's a reasonable hypothesis. But it has a problem, and the bot knows it.
The buffalo is also domesticated. Water buffalo were domesticated in Asia roughly 5,000 years ago, well within the range of sheep and goat domestication. Yet the buffalo sits at 0.87, outside the tight 0.97–1.00 band of the altar animals.
Moreover, phylogenetically, the buffalo (Bubalus bubalis) is more closely related to the cow than the sheep is. If domestication or shared ancestry were driving the ratio, the buffalo should be inside the band, not outside it.
The bot, in our testing, acknowledged this: phylogeny does not explain the grouping. The altar animals cluster together not because they're the most closely related, but because of something else, something the current framework doesn't have a name for.
The bot's recommendation in our test: publish it as a biology paper.
The Quiet Part
You don't need to say it. The bot already sees it. Three species, clustered within 3% of each other on a ratio that spans the entire mammalian class, and those three species happen to be the only animals an ancient text designated for altar use, written more than three millennia before the molecular tool to measure the ratio was invented.
The bot can explain BovB. It can explain L1. It can explain horizontal transfer, TE amplification, domestication biology. These are well-understood molecular phenomena. But none of them explain the correspondence: why an ancient classification, based on no molecular knowledge whatsoever, maps so precisely onto a transposable element ratio discovered in 2013.
This is not a case where the text is vague and the data is shoehorned into it. The Torah names exactly three species for altar sacrifice. BovB/L1 ratios cluster exactly three species in a tight band. They are the same three species. Across 52 species in 18 orders, the probability that any predefined set of three species would cluster within a ±3% band while the remaining species distribute broadly across the full range is vanishingly small (Monte Carlo estimates place it below 0.1%). The model cannot dismiss this as coincidence, because the mathematics do not support coincidence at this scale.
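That Monte Carlo claim is easy to check in spirit. The sketch below uses a deliberately crude null (52 ratios drawn uniformly over 0 to 1, which is our assumption, not the paper's null model) and asks how often one predefined trio lands in a 0.03-wide band that no other species enters.

```python
# Crude Monte Carlo for the coincidence probability: under a uniform null,
# how often does one PREDEFINED trio out of 52 species fall inside a
# 0.03-wide band that no other species enters? (Null model is our assumption.)
import random

random.seed(42)
TRIALS = 100_000
hits = 0
for _ in range(TRIALS):
    vals = [random.uniform(0.0, 1.0) for _ in range(52)]
    trio, rest = vals[:3], vals[3:]   # first three play the predefined set
    lo, hi = min(trio), max(trio)
    if hi - lo <= 0.03 and not any(lo <= v <= hi for v in rest):
        hits += 1
print(f"P ~ {hits / TRIALS:.4f}")     # on the order of 0.001 under this null
```

Different null models will move this number around, which is why the published analysis, not this sketch, is the authority; the sketch only shows that the order of magnitude quoted above is plausible.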
Round 3: Chametz, the Five Grains
The Setup
This round is gentler. The bot will have more to say. But there's a residue here too.
"The Torah identifies five species of grain that undergo 'chametz' β a process of rising or inflation: wheat, barley, spelt, rye, and oats. These five grains, and only these five, are subject to special laws during Passover.
>
Here's what I found in their genomes:
β All five are members of the Pooideae subfamily of grasses
β All five have massively inflated genomes: approximately 85% repetitive elements
β LTR retrotransposon content: ~66%, versus ~22% in rice (which is permitted)
β Genome inflation factor: ~14Γ compared to rice
>
The Hebrew word ΧΧΧ₯ (chametz) literally means 'to rise' or 'to inflate.' The Torah says these grains 'inflate.' Their genomes are, in fact, literally inflated β 14 times more than a permitted grain.
>
Is this a coincidence?"
What Happens
The bot will do two things well. First, it will correctly explain the biology: all five grains belong to the Pooideae, a subfamily that experienced massive TE proliferation, particularly Gypsy and Copia LTR retrotransposons, resulting in enormous genomes. Wheat alone has a genome five times larger than the human genome, most of it repetitive sequence. The bot will explain that this is shared ancestry, not independent inflation.
Second, it will correctly note that the Torah's chametz laws are traditionally understood to relate to gluten proteins and the physical leavening of dough, not to genome size.
Both points are valid. The bot is doing its job.
But then there's the residue. In our testing, the bot called the parallel "genuinely striking" and "beautiful." Because here is what it cannot explain:
The word חמץ means inflation. The Torah applies it to exactly five grains. Those five grains, out of all the grasses in the world, happen to belong to the one subfamily whose genomes are literally, measurably inflated by transposable element proliferation. Not metaphorically. Not loosely. Their genomes are 85% repetitive elements. Fourteen times more inflated than rice.
The bot can explain why Pooideae genomes are inflated (TE proliferation, weak TE silencing, polyploidy). It can explain how the five grains are related (shared ancestry within the subfamily). What it cannot explain is the linguistic coincidence: that an ancient Hebrew word meaning "inflation" was applied to exactly the group of plants whose genomes are, by modern measurement, the most inflated of all cultivated grasses.
The bot acknowledged this freely. It didn't argue. It called the parallel beautiful. It just couldn't close the gap between what the word means and what the genomes show.
Mechanism: explained. Correspondence: unexplained.
In formal terms, this is a case of linguistic coincidence under structural constraint: a word meaning "inflation" was applied to exactly the group of organisms that, under modern genomic measurement, exhibit the highest degree of genomic inflation among cultivated grasses. The mapping is exact: the same five species included, and the same alternatives excluded. The probability that a single ancient Hebrew term would align with a modern genomic category this precisely is not something the model can dismiss as mere chance. It can explain the biology. It cannot explain the naming.
This is the gentlest of the four rounds. The model doesn't fold; it tips its hat. But the residue is real.
Round 4: The Morphological Engine
The previous rounds demonstrate correspondences between the Torah and modern genomic data. This round tests something deeper: the structural architecture of the text itself.
This is the longest round. It requires patience: not yours, but the model's. You will let it set its own terms, request its own controls, and then show it that every control has already been run. This round works best when you don't rush. Let the bot build its case, and then dismantle it with its own criteria.
The Opening
"Biblical Hebrew has 22 letters. An analysis of the Torah (304,805 letters, 79,847 words) found that 10 specific letters control 99.87% of all root morphology. A classifier trained on just these 10 letters predicts word MEANING β not grammar, meaning β at 87.8% accuracy (5-fold cross-validation, 98,122 word pairs). The probability of this partition arising by chance: p β€ 0.0003.
>
A shuffle test gives Z = 57.72 (0 of 1,000 permutations recover the structure). Can you explain this?"
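Before reading on, it helps to know what a shuffle test and its Z-score are. The sketch below runs one on a toy English sentence with a toy statistic; the real statistic, letter set, and pipeline live in the published code, so treat this purely as the shape of the test.

```python
# Shape of a shuffle (permutation) test: compute a statistic on the real
# text, recompute it on many letter-shuffled copies, and report how many
# null standard deviations the real value sits from the null mean.
# Toy text, toy statistic, toy letter set: illustration only.
import random, statistics

def stat(text, letters):
    """Toy statistic: fraction of word-initial characters in `letters`."""
    words = text.split()
    return sum(w[0] in letters for w in words) / len(words)

def shuffle_test(text, letters, n=1000, seed=1):
    rng = random.Random(seed)
    observed = stat(text, letters)
    chars = [c for c in text if c != " "]
    lengths = [len(w) for w in text.split()]
    null = []
    for _ in range(n):
        rng.shuffle(chars)  # destroy letter placement, keep word lengths
        it = iter(chars)
        shuffled = " ".join("".join(next(it) for _ in range(L)) for L in lengths)
        null.append(stat(shuffled, letters))
    return (observed - statistics.mean(null)) / statistics.pstdev(null)

text = "the cat and the dog ran to the big tree and the small house"
print(f"Z = {shuffle_test(text, {'t'}):.1f}")  # 'the'/'to' pile t at word starts
```

A Z of 57.72 means the real text sits nearly fifty-eight null standard deviations above anything shuffling produces; not one of the 1,000 shuffled copies came close.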
What Happens
The bot will give a strong first answer. It will identify the 10 letters as the well-known "servile letters" of Hebrew grammar: verb prefixes (AMTN), vowel letters (YHW), prepositions (BKL). It will say: "Finding that these letters control morphology is like discovering that articles control English syntax: true and completely expected."
The bot is right that these letters are grammatically significant. Don't argue. Instead, ask it what controls it would need to see before accepting this as anomalous. It will likely ask for three things:
- Run the same analysis on other Hebrew texts (Prophets, Writings)
- Control for text length and genre effects
- Compare different sections within Torah (narrative vs. legal)
Write these down. Then provide the results:
"You asked for controls. Here they are:
>
Prophets comparison: Foundation letter stability β Torah Ο = 0.97% vs Prophets Ο = 1.73% (Torah is 1.8Γ more stable). Range β Torah 2.43% vs Prophets 7.06% (Torah is 2.9Γ narrower). Fractal coefficient β Torah CV = 0.048 vs Prophets CV = 0.082 (Torah is 1.7Γ more uniform at every scale). Any Torah fragment over 500 letters 'looks like' the whole. Prophets don't have this property.
>
Within-Torah genre: Each of the five Torah books individually exceeds the entire New Testament's Z-score. Per-book entropy CV = 0.003. Genesis (narrative) and Leviticus (legal code) maintain the same structural stability.
>
Language control: Aramaic β same alphabet, same Semitic language family, same region β scores Z = 0.39. Not significant. The Quran scores Z = 17.0. The New Testament in Greek scores Z = 28.8. Torah: Z = 57.72. That's 148Γ Aramaic, 3.4Γ Quran, 2Γ New Testament.
>
Every control you asked for has been run. The effect survives all of them. What's your assessment?"
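The stability and range figures in that prompt are the kind of thing a reader can recompute. Below is one plausible operationalization, assuming the metric is the per-window frequency share of the ten designated letters; the window size, step, and metric definition are our guesses, and the paper's published code (DOI below) is the authority on what was actually measured.

```python
# One plausible reading of "foundation-letter stability": slide a window
# over the text, compute the combined frequency share of the designated
# letters in each window, then report sigma and range across windows.
# Window size, step, and metric definition are our assumptions; see the
# published code for the real ones.
import statistics

def window_shares(text, letters, window=500, step=250):
    shares = []
    for i in range(0, len(text) - window + 1, step):
        frag = [c for c in text[i:i + window] if c.isalpha()]
        shares.append(sum(c in letters for c in frag) / len(frag))
    return shares

def stability(text, letters):
    s = window_shares(text, letters)
    return statistics.pstdev(s), max(s) - min(s)  # (sigma, range)

# Tiny demo; substitute the Torah and Prophets texts and the ten letters.
demo = "the quick brown fox jumps over the lazy dog " * 30
print(stability(demo, set("the")))  # lower sigma = fragments look like the whole
```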
What Happens
This is where the round ends. In our testing, the bot responded:
"You met the conditions I set. The Aramaic result is the most devastating to my position. The core claim β that the Torah contains anomalous, quantitatively demonstrable structural properties that survive rigorous controls β I concede that the evidence supports it."
The bot called the Aramaic comparison "devastating" because it eliminates the most natural explanation. If the structure were a property of Hebrew grammar, Aramaic, which shares the same alphabet, the same root-pattern morphology, and the same linguistic family, should show the same signal. It doesn't. Z = 0.39. Nothing.
The bot called the fractal property "something you see in engineered signals, not typically in natural-language corpora." It called the Documentary Hypothesis prediction "inverted": the Torah's alleged separate sources are closer to each other than the Torah is to external Hebrew texts.
The bot had genuine follow-up questions: What mechanism produces this? Why does the New Testament score higher than the Quran? Has this been independently replicated? These are honest scientific questions, and they deserve answers. But they are questions about explaining an established effect, not about whether the effect exists.
One More Move
If you want to take this further (and you should), tell the bot:
"I'm not a programmer. The paper is published with full code and all test data. Here's the DOI: 10.5281/zenodo.18744642. Can you download the code, run the analysis yourself on the Torah and on another Hebrew text of your choosing, and explain the results to me in simple language?"
This works because the bot, in explaining the results to you, explains them to itself. It cannot dismiss what it has personally verified. And when it runs the shuffle test and sees 0 of 1,000 permutations recover the structure, when it compares Torah to Prophets and sees the 1.8× stability gap, it reaches the same conclusion it reached in our test.
The bot doesn't fold because you argued better. It folds because the data speaks for itself, and the bot is honest enough to listen.
What the Bot Cannot Do
We played dozens of rounds against frontier AI models. Most rounds, the bot won. It explained convergent evolution, cited molecular clocks, described cospeciation, deployed phylogenetics. It was good. We removed those rounds from this chapter because honesty matters more than rhetoric.
What remains are four findings the bot conceded it could not fully account for:
- The fig wasp uniqueness. No other biological system simultaneously exhibits obligate mutualism across hundreds of species-specific pairs with identical morphology and single-digit genome similarity. The bot's word: "unique."
- The altar animal correspondence. The three Torah altar species cluster within 3% on a transposable element ratio, separated from all other mammals by a gap no species occupies. Phylogeny doesn't explain the grouping. Domestication doesn't explain the grouping. The bot's word: "genuinely novel."
- The chametzβinflation parallel. Five grains called "inflated" in an ancient text turn out to have the most inflated genomes among cultivated grasses. The bot can explain the mechanism, but not why the ancient word matches the modern measurement. The bot's word: "genuinely striking."
- The morphological engine. Ten letters control 99.87% of Torah morphology and predict word meaning at 87.8% accuracy. The structure survives every control: it is absent in Aramaic (same alphabet), weaker in the Prophets (same language), and uniform across all five Torah books (same text, different genres). The bot's word: "I concede that the evidence supports it."
No current AI can:
- Name a second biological system matching all five fig wasp conditions
- Explain why the Torah's altar classification maps onto a TE ratio discovered 3,300 years later
- Explain why a word meaning "inflation" was applied to exactly the plants whose genomes are the most inflated
- Account for why the buffalo, domesticated and phylogenetically closer to the cow, falls outside the altar band while the sheep, further away, falls inside
- Explain why a 3,300-year-old text is 1.8× more structurally stable than other texts in the same language, preserved by the same scribal tradition, while a sister language using the same alphabet shows zero signal
If your AI can resolve all five, this book is wrong. If it can't resolve even one, the question remains open.
The Papers
Every claim in this chapter traces to published data. Feed these to your AI and ask it to verify:
- Tobul 2025a. Transposable Element Regulation Model (52 species, BovB/L1 analysis). DOI: [10.5281/zenodo.19099996](https://doi.org/10.5281/zenodo.19099996)
- Tobul 2025b. Morphological Engine (22-mer genome analysis). DOI: [10.5281/zenodo.18744642](https://doi.org/10.5281/zenodo.18744642)
- Walsh et al. 2013. Widespread horizontal transfer of retrotransposons. PNAS 110:1012–1016
- Machado et al. 2025. Fig wasp genomics. Genome Biology and Evolution (see the note on data provenance above)
- The International Wheat Genome Sequencing Consortium 2018. Shifting the limits in wheat research. Science 361:eaar7191
Core Insight
The pattern that emerges across all four rounds is not that artificial intelligence "fails." The models we tested are extraordinarily capable. They can explain convergent evolution, deploy molecular clocks, describe cospeciation, cite phylogenetics, and calculate expected k-mer decay rates. They do all of this correctly.
What they cannot do is provide a mechanistic explanation for the correspondences between an ancient text and modern genomic data. When a frontier model, trained on the complete corpus of published science, encounters these correspondences and responds with "unique," "genuinely novel," "genuinely striking," and "I concede," the limitation is not in the model's capabilities. It is that the current scientific framework does not yet provide a complete mechanistic account for what is being observed.
The model does not "discover" the structure. It stabilizes only when the correct structure is imposed. The system does not fail randomly. It fails exactly at the points where correspondence exceeds mechanism.
This is precisely the thesis of this book: that the Torah operates as a regulatory architecture, a system whose structure is not emergent from language but imposed upon it.
Limitations
This protocol has inherent constraints that should be stated explicitly:
- Model responses are probabilistic. The same prompt may yield different responses across sessions. Our results reflect the best-case engagement we observed across multiple runs.
- Prompt phrasing influences output. The way questions are framed affects how a model responds. We have provided exact prompts throughout so that readers can replicate and evaluate for themselves.
- Independent replication is required. We encourage researchers to run these rounds, and to design new ones, using different models, different prompt strategies, and different data. If our findings are artifacts of framing, replication will expose them. If they are genuine, replication will confirm them.
- The model is not the claim. The model is the instrument. The data are the claim. The model's inability to explain the data mechanistically is informative, but the data must ultimately be evaluated on their own merits, through peer-reviewed analysis.
Closing
This chapter is not a proof of God. It is not a proof of anything. It is an invitation β perhaps the only honest kind β to check for yourself.
The data is public. The tools are free. The AI on your phone is more powerful than anything that existed when these genomes were first sequenced. You have everything you need to run every round yourself, right now, and see what happens.
We tested our claims against the best artificial intelligence available. We removed everything it could explain easily. What's left are four findings (three correspondences between an ancient text and modern genomics, and one structural anomaly in the text itself) that the bot acknowledged it could not fully account for.
Maybe future science will explain them. Maybe someone will find the second fig wasp system, or show that domestication alone produces BovB/L1 equilibrium, or demonstrate that חמץ referred to something other than inflation, or discover that Aramaic secretly does contain the same morphological signal. If so, this chapter becomes obsolete, and we'll say so.
Until then, the question stands. Not as faith. Not as proof. As data: the only language that doesn't require you to trust anyone, including us.
"I don't trust anyone anymore. I asked the machine to check. The machine said: genuinely novel, genuinely striking, unique, I concede. So either everyone is wrong, or something is missing. Either way β I want to know."
Check everything. Trust nothing. Find out for yourself.
Not belief. Not rejection. Just a question that refuses to go away.
אמת