
Automated Readability Index Calculator

The Automated Readability Index (ARI) returns a US school grade level for any piece of text. Unlike Flesch-Kincaid or SMOG, it uses character counts instead of syllable counts — a design choice that makes it fast, deterministic, and especially well-suited to technical, educational, and machine-processed content.


What the Automated Readability Index measures

ARI was developed by E. A. Smith and R. J. Senter in 1967 for the US Air Force, which needed a readability metric that could be calculated by mid-1960s computers without custom syllable-counting software. The team realised that average word length in characters was nearly as good a predictor of reading difficulty as syllable count — and was trivial to compute on any text-processing system.

The formula combines average word length (characters per word) with average sentence length (words per sentence) to produce a single integer grade level. ARI is conventionally rounded up: a raw score of 7.2 is reported as grade 8, because the formula was designed to err on the side of placing readers comfortably above the text's difficulty rather than just at it.

How to interpret your ARI score

ARI   | Reading level       | Typical content
1–4   | Elementary          | Children's books, beginner readers
5–6   | Middle school       | Young adult content, plain-language web
7–8   | High school entry   | Most blog posts, mainstream news
9–11  | High school         | Newspapers of record, business writing
12–14 | College entry       | White papers, technical documentation
15+   | College / Graduate  | Academic prose, scientific writing

Because ARI rounds up, the practical reading bands are slightly more conservative than the raw grade numbers suggest. An ARI of 8 means "an 8th grader can read this comfortably" rather than "this is exactly 8th-grade level."

When to use the Automated Readability Index

  • Technical writing — engineering documentation, equipment manuals, software specifications.
  • Educational publishing — textbook leveling, classroom material grading.
  • Government and military training material — original use case; still common in defence and aerospace technical writing.
  • CMS plugins and SEO tools — fast to compute, no syllable dictionary required.
  • Translated content review — character counts are stable across English variants and most Western European languages.
  • Cross-validation — when Flesch-Kincaid and Coleman-Liau disagree, ARI provides a third independent reading.

How ARI is calculated

ARI = 4.71 × (characters / words) + 0.5 × (words / sentences) − 21.43
then rounded up to the next integer

Average word length (characters per word) is multiplied by 4.71. Average sentence length (words per sentence) is multiplied by 0.5. Subtract a constant to recentre the scale on US grade levels, round up to the next integer, and you have your ARI.
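The formula above can be sketched in a few lines of Python. The tokenisation here — splitting sentences on terminal punctuation and counting letters, digits, apostrophes and hyphens as word characters — is one reasonable choice, not a standard; real calculators differ in how they treat abbreviations, URLs, and stray punctuation.

```python
import math
import re

def ari(text: str) -> int:
    """ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43, rounded up."""
    # Sentence split on terminal punctuation; word split on letter/digit runs.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9'-]+", text)
    chars = sum(len(w) for w in words)
    score = (4.71 * chars / len(words)
             + 0.5 * len(words) / len(sentences)
             - 21.43)
    return math.ceil(score)  # ARI is conventionally rounded up
```

Note that the raw formula can go negative on very simple text (short words, short sentences); that is a property of the formula itself, not a bug in the sketch.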

Because the character-per-word coefficient (4.71) is nearly 10× the words-per-sentence coefficient (0.5), word length dominates the score. Cutting one character from each word in a 100-word passage moves the ARI more than cutting two words from each sentence. Vocabulary substitution is the strongest editing lever — the same pattern that holds for Coleman-Liau.
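The arithmetic behind that claim is straightforward. Using a hypothetical baseline of 5 characters per word and 20 words per sentence, the two competing edits move the score like this:

```python
# Score change from two competing edits, per the ARI coefficients.
# Baseline (hypothetical): avg word length 5 chars, avg sentence length 20 words.
delta_word = 4.71 * (5.0 - 4.0)       # shave one character off the average word
delta_sentence = 0.5 * (20.0 - 18.0)  # shave two words off the average sentence
print(delta_word, delta_sentence)     # word-length edit wins, 4.71 vs 1.0 grades
```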

ARI vs other readability formulas

Coleman-Liau is ARI's closest sibling — both use character counts, both produce US grade levels, both originated as machine-friendly alternatives to syllable-based formulas. The practical difference: ARI weights sentence length more heavily. Long-sentence-but-short-word writing scores higher on ARI than on Coleman-Liau; short-sentence-but-long-word writing does the opposite.
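The divergence shows up even with summary statistics alone. This sketch compares raw (unrounded) ARI against the published Coleman-Liau formula (0.0588·L − 0.296·S − 15.8, where L is letters per 100 words and S is sentences per 100 words); the two input profiles are invented for illustration.

```python
def ari_raw(chars_per_word: float, words_per_sentence: float) -> float:
    # ARI before rounding up.
    return 4.71 * chars_per_word + 0.5 * words_per_sentence - 21.43

def coleman_liau(chars_per_word: float, words_per_sentence: float) -> float:
    # Coleman-Liau: L = letters per 100 words, S = sentences per 100 words.
    L = chars_per_word * 100
    S = 100 / words_per_sentence
    return 0.0588 * L - 0.296 * S - 15.8

# Long sentences, short words: ARI reads higher than Coleman-Liau.
print(ari_raw(4.2, 30), coleman_liau(4.2, 30))
# Short sentences, long words: Coleman-Liau reads higher.
print(ari_raw(6.0, 8), coleman_liau(6.0, 8))
```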

Compared to Flesch-Kincaid Grade Level, ARI is generally within 1 to 2 grades on the same passage. The two formulas can diverge sharply on text with unusual vocabulary — a passage full of long proper nouns will score lower on Flesch-Kincaid (the syllable counter undercounts proper nouns) and higher on ARI (characters are characters). When the two disagree, ARI is usually the more conservative read.

SMOG and Gunning Fog use complex-word percentages rather than character counts. They penalise jargon-heavy writing more aggressively than ARI does — appropriate for healthcare and policy contexts where jargon directly impedes comprehension, less appropriate for technical writing where some jargon is unavoidable.

Frequently asked questions

What is the Automated Readability Index?

A US grade-level readability formula that uses character and word counts rather than syllables. Developed for the US Air Force in 1967.

What is a good ARI score?

Target 7–9 for general writing, 5–7 for marketing copy, 11–16 for technical and academic content. ARI is conventionally rounded up to the next whole number.

How is ARI different from Coleman-Liau?

Both use character counts and produce US grade levels, but ARI weights sentence length more heavily. They usually agree within 1–2 grades. ARI scores higher on long sentences; Coleman-Liau scores higher on long words.

Why does ARI round up?

The original 1967 specification rounds to the next integer because partial grade levels were impractical for the formula's intended use — placing Air Force training manuals at appropriate reading levels for enlisted personnel. The convention has stuck.

Where did ARI come from?

E. A. Smith and R. J. Senter developed it for the US Air Force in 1967, as part of a project to mechanise readability scoring for technical training material.

How do I lower my ARI score?

Shorten words. The character-per-word term is roughly 10× the weight of the sentence-length term, so vocabulary substitution moves the score faster than splitting sentences.
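A quick before-and-after check makes this concrete. The sketch below reuses the ARI formula (with a simplified sentence/word tokenisation) on two hypothetical renderings of the same instruction — only the vocabulary differs:

```python
import math
import re

def ari(text: str) -> int:
    """ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43, rounded up."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9'-]+", text)
    score = (4.71 * sum(map(len, words)) / len(words)
             + 0.5 * len(words) / len(sentences)
             - 21.43)
    return math.ceil(score)

wordy = "Utilize the configuration functionality to accomplish modifications."
plain = "Use the settings to make changes."
print(ari(wordy), ari(plain))  # shorter vocabulary scores several grades lower
```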