Evan Chen《陳誼廷》


Math olympiad beginner's page

This page is meant for people who don’t have much past olympiad/proof experience and are looking to get started. If you aren’t interested in proof-based problems yet, then this page is not for you. Try checking FAQ C-0 if you are totally new to math contests.

Before all else, welcome to the olympiad scene! It’s going to be hard as heck, but in my private opinion this is where all the coolest stuff is (as far as math contests go, anyway 1 ). Stay around for long enough, and you will get to see a lot of really amazing problems.

You may also read math contest FAQs for some more philosophical (and less concrete) advice on studying for math contests in general.

0. Syllabus #

I wrote an unofficial syllabus for math olympiads (also linked on handouts ) giving some guidance on what topics appear on math olympiads.

1. First reading: the welcome letter #

For the USA Math Olympiad in 2020, the board of the olympiad prepared an invitation letter 2 for all the qualifiers, congratulating them on their achievement and giving them some suggestions on where to begin. This letter contains:

  • A few pretty carefully chosen problems (not necessarily easy!), to give people a sense of what to expect on the contest
  • Fully written solutions to those chosen problems, so that you can see what a correct and complete solution is expected to look like.
  • Some advice for actually taking the contest: the format of the exam, planning your time, common mistakes, etc.

You can download the letters here:

  • Welcome letter for junior students (9th–10th grade), and solutions to examples
  • Welcome letter for older students (11th–12th grade), and solutions to examples

I suggest starting by reading through this letter, trying the example problems (you will probably not solve them all; we chose examples from the entire difficulty spectrum), and then comparing your work to the provided solutions. That will give you a taste of what you are getting into.

2. Writing proofs #

If you don’t have experience with proof-based problems, the first thing I should say is that it is not as hard as you might think . Solving the problems completely is difficult, but if you really have a completely correct solution to a problem, it is actually pretty hard to not get full credit. I would say at least 90% of the time, when a student loses points on the USA(J)MO unexpectedly, it’s because their proof is actually incomplete, not (just) badly-written.

Of course, you should still try to write your solutions as clearly as possible. To that end, here are some links to advice:

  • How to write a solution, from Art of Problem Solving.
  • Remarks on English, written by me.
  • How to write proofs, by Larry W. Cusick.

You don’t need to get too caught up in these links; proof-writing will become more natural over time as you solve more problems. So I would encourage you to continue doing practice problems or reading books at the same time as you are getting used to writing; these go hand-in-hand and I actually suspect it’s counterproductive to try to practice writing in isolation.

If possible, the best way is to have a friend or coach who can check your work and provide suggestions. But the supply of people willing to do this is admittedly very low, so most people are not lucky enough to have access to feedback. Almost everyone gets by instead with something like the following algorithm:

  • Write up your solution neatly.
  • Look up the problem on the AoPS contest index 4 and compare your solution to those written by reputable users.
  • Edit your solution and post it on the thread. By Cunningham’s Law , wrong solutions are often exposed quite rapidly.

Together these three steps should catch “most” substantial errors. See Section B.1 of my English handout for more details about this procedure.

If you want a book to follow, the one I grew up with was Joseph J Rotman’s Journey into Mathematics: An Introduction to Proofs . 3

If you like excessive information, you might also read my handout Intro to Proofs for the Morbidly Curious .

3. (For USA) United States Mathematical Talent Search and USEMO #

If you are in the United States, there is a nice proof-based contest called the USAMTS which is a great way for beginners to get started. Things that make the USAMTS special:

  • It is a free, individual, online contest open to any students in the USA.
  • The problems are chosen to be quite beginner-friendly, though with a spectrum of difficulty each round.
  • This contest gives you a full month to work on the problems rather than having a short time limit.
  • You get some feedback on your proofs as well, not just a score.

I also run a contest called the USEMO in the fall, which is likewise free and offers feedback, but it is more difficult, since it is intended to mimic the USA Math Olympiad and International Math Olympiad in format and difficulty. One could treat it as a “practice IMO”.

4. Books to read #

There is some more material you have to learn as well, since there are some new classes of problems (such as olympiad geometry, functional equations, orders mod p, etc.) that you will likely not have seen before from just working on short-answer contests.

Some possible suggestions for introductory books:

  • General: Art and Craft of Problem-Solving by Paul Zeitz is a good “first book” for all the fields.
  • Geometry: My book EGMO; more alternatives are linked at the bottom of that page.
  • Number theory: Modern Olympiad Number Theory is the most comprehensive text I know of now.

The OTIS Excerpts has beginner introductions for several topics:

  • Inequalities (chapters 1-2)
  • Functional equations (chapters 3-4)
  • Combinatorics (chapters 6-9)

More possibilities (including intermediate-advanced texts not listed here) are on the links page . You might also check Geoff Smith’s advice and links .

I really want to stress these are mere suggestions . Just because you have done X does not mean you will achieve your goals, and conversely, there are surely many fantastic resources that I have not even heard of. If you are looking for a list of materials which are “guaranteed to be enough” for solving IMO #1 and #4, ask elsewhere.

5. Problem sources #

At some point (sooner rather than later), you also need to start just working through some past problems from recent years of contests. You can think of this as encountering problems in the wild. 5

In case you didn’t know already, Art of Problem Solving hosts an extensive archive of past problems from basically every competition under the sun, together with community-contributed solutions. The supply of problems here is inexhaustible.

Here are some particular contests I like (alphabetical):

  • Canada national olympiad
  • European Girls’ Mathematical Olympiad
  • IMO and IMO Shortlist
  • NICE , open to anyone
  • USA Team Selection Tests

The bottom of the recommendations page has some more suggestions for problems if this list isn’t sufficient.

Good luck and happy solving!

Insomuch as contest problems have intrinsic artistic value, proof-based exams are a more versatile medium than the short-answer exams, much like how videos are more versatile than static photos. In this analogy, videos don’t make stills worthless or obsolete; they aren’t automatically better, either. But as a medium, they expand the space of ideas an artist could express, at the cost of being proportionately more work to create.  ↩

These were written in early January 2020 before COVID-19 wreaked havoc on everything, so the contests still go by their typical name and don’t mention anything specific to the belated administration that year.  ↩

It shares my philosophy that teaching proof-based classes by force-feeding set theory notation is not particularly helpful, and instead develops proof-writing by discussing real mathematical content from geometry, number theory, etc. rather than being overly focused on bookkeeping and formalism.  ↩

I do NOT recommend using the AoPS Wiki in place of the Contest Index. The solution quality in the wiki is generally much poorer than the forum.  ↩

I used to carry a binder with printouts of the IMO shortlist and check them off as I solved them.  ↩


Art of Problem Solving Introduction to Geometry

Learn the fundamentals of geometry from former USA Mathematical Olympiad winner Richard Rusczyk. Topics covered in the book include similar triangles, congruent triangles, quadrilaterals, polygons, circles, funky areas, power of a point, three-dimensional geometry, transformations, and much more.  

The text is structured to inspire the reader to explore and develop new ideas. Each section starts with problems, so the student has a chance to solve them without help before proceeding. The text then includes solutions to these problems, through which geometric techniques are taught. Important facts and powerful problem solving approaches are highlighted throughout the text. In addition to the instructional material, the book contains over 900 problems. The solutions manual contains full solutions to all of the problems, not just answers.  

This book can serve as a complete geometry course, and is ideal for students who have mastered basic algebra, such as solving linear equations. Middle school students preparing for MATHCOUNTS, high school students preparing for the AMC, and other students seeking to master the fundamentals of geometry will find this book an instrumental part of their mathematics libraries.

This set includes both the lessons and solutions manuals.


Open access · Published: 17 January 2024

Solving olympiad geometry without human demonstrations

Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He & Thang Luong

Nature, volume 625, pages 476–482 (2024)


An Author Correction to this article was published on 23 February 2024


Proving mathematical theorems at the olympiad level represents a notable milestone in human-level automated reasoning 1 , 2 , 3 , 4 , owing to their reputed difficulty among the world’s best talents in pre-university mathematics. Current machine-learning approaches, however, are not applicable to most mathematical domains owing to the high cost of translating human proofs into machine-verifiable format. The problem is even worse for geometry because of its unique translation challenges 1 , 5 , resulting in severe scarcity of training data. We propose AlphaGeometry, a theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity. AlphaGeometry is a neuro-symbolic system that uses a neural language model, trained from scratch on our large-scale synthetic data, to guide a symbolic deduction engine through infinite branching points in challenging problems. On a test set of 30 latest olympiad-level problems, AlphaGeometry solves 25, outperforming the previous best method that only solves ten problems and approaching the performance of an average International Mathematical Olympiad (IMO) gold medallist. Notably, AlphaGeometry produces human-readable proofs, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation and discovers a generalized version of a translated IMO theorem in 2004.


Proving theorems showcases the mastery of logical reasoning and the ability to search through an infinitely large space of actions towards a target, signifying a remarkable problem-solving skill. Since the 1950s (refs.  6 , 7 ), the pursuit of better theorem-proving capabilities has been a constant focus of artificial intelligence (AI) research 8 . Mathematical olympiads are the most reputed theorem-proving competitions in the world, with a similarly long history dating back to 1959, playing an instrumental role in identifying exceptional talents in problem solving. Matching top human performances at the olympiad level has become a notable milestone of AI research 2 , 3 , 4 .

Theorem proving is difficult for learning-based methods because training data of human proofs translated into machine-verifiable languages are scarce in most mathematical domains. Geometry stands out among other olympiad domains because it has very few proof examples in general-purpose mathematical languages such as Lean 9 owing to translation difficulties unique to geometry 1 , 5 . Geometry-specific languages, on the other hand, are narrowly defined and thus unable to express many human proofs that use tools beyond the scope of geometry, such as complex numbers (Extended Data Figs. 3 and 4 ). Overall, this creates a data bottleneck, causing geometry to lag behind in recent progress that uses human demonstrations 2 , 3 , 4 . Current approaches to geometry, therefore, still primarily rely on symbolic methods and human-designed, hard-coded search heuristics 10 , 11 , 12 , 13 , 14 .

We present an alternative method for theorem proving using synthetic data, thus sidestepping the need for translating human-provided proof examples. We focus on Euclidean plane geometry and exclude topics such as geometric inequalities and combinatorial geometry. By using existing symbolic engines on a diverse set of random theorem premises, we extracted 100 million synthetic theorems and their proofs, many with more than 200 proof steps, four times longer than the average proof length of olympiad theorems. We further define and use the concept of dependency difference in synthetic proof generation, allowing our method to produce nearly 10 million synthetic proof steps that construct auxiliary points, reaching beyond the scope of pure symbolic deduction. Auxiliary construction is geometry’s instance of exogenous term generation, representing the infinite branching factor of theorem proving, and widely recognized in other mathematical domains as the key challenge to proving many hard theorems 1 , 2 . Our work therefore demonstrates a successful case of generating synthetic data and learning to solve this key challenge. With this solution, we present a general guiding framework and discuss its applicability to other domains in Methods section ‘AlphaGeometry framework and applicability to other domains’.

We pretrain a language model on all generated synthetic data and fine-tune it to focus on auxiliary construction during proof search, delegating all deduction proof steps to specialized symbolic engines. This follows standard settings in the literature, in which language models such as GPT-f (ref.  15 ), after being trained on human proof examples, can generate exogenous proof terms as inputs to fast and accurate symbolic engines such as nlinarith or ring 2 , 3 , 16 , using the best of both worlds. Our geometry theorem prover AlphaGeometry, illustrated in Fig. 1 , produces human-readable proofs, substantially outperforms the previous state-of-the-art geometry-theorem-proving computer program and approaches the performance of an average IMO gold medallist on a test set of 30 classical geometry problems translated from the IMO as shown in Fig. 2 .

Figure 1. The top row shows how AlphaGeometry solves a simple problem. a, The simple example and its diagram. b, AlphaGeometry initiates the proof search by running the symbolic deduction engine. The engine exhaustively deduces new statements from the theorem premises until the theorem is proven or new statements are exhausted. c, Because the symbolic engine fails to find a proof, the language model constructs one auxiliary point, growing the proof state before the symbolic engine retries. The loop continues until a solution is found. d, For the simple example, the loop terminates after the first auxiliary construction “D as the midpoint of BC”. The proof consists of two other steps, both of which make use of the midpoint properties: “BD = DC” and “B, D, C are collinear”, highlighted in blue. The bottom row shows how AlphaGeometry solves IMO 2015 Problem 3 (IMO 2015 P3). e, The IMO 2015 P3 problem statement and diagram. f, The solution of IMO 2015 P3 has three auxiliary points. In both solutions, we arrange language model outputs (blue) interleaved with symbolic engine outputs to reflect their execution order. Note that the proof for IMO 2015 P3 in f is greatly shortened and edited for illustration purposes; its full version is in the Supplementary Information.

Figure 2. The test benchmark includes official IMO problems from 2000 to the present that can be represented in the geometry environment used in our work. Human performance is estimated by rescaling IMO contest scores from the 0–7 scale to the 0–1 scale, to match the binary failure/success outcome of the machines. For example, a contestant’s score of 4 out of 7 is scaled to 0.57 problems in this comparison. The score for AlphaGeometry and other machine solvers on any problem, on the other hand, is either 0 (not solved) or 1 (solved). Note that this is only an approximate comparison with humans on classical geometry, who operate on natural-language statements rather than narrow, domain-specific translations. Further, the general IMO contest also includes other types of problem, such as geometric inequalities and combinatorial geometry, and other domains of mathematics, such as algebra, number theory and combinatorics.

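The human-score rescaling described in the caption of Fig. 2 amounts to a one-line calculation; a toy sketch (`rescale` is an invented helper, not part of the paper's code):

```python
def rescale(score, max_score=7):
    """Rescale a human contestant's 0-7 problem score to the 0-1
    success scale used for the machine solvers."""
    return round(score / max_score, 2)

# A contestant scoring 4 out of 7 counts as 0.57 of a solved problem,
# whereas a machine solver's score per problem is exactly 0 or 1.
print(rescale(4))  # 0.57
```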

Synthetic theorems and proofs generation

Our method for generating synthetic data is shown in Fig. 3. We first sample a random set of theorem premises, serving as the input to the symbolic deduction engine to generate its derivations. A full list of actions used for this sampling can be found in Extended Data Table 1. In our work, we sampled nearly 1 billion such premises in a highly parallelized setting, described in Methods. Note that we do not make use of any existing theorem premises from human-designed problem sets and sampled the eligible constructions uniformly at random.

Figure 3. a, We first sample a large set of random theorem premises. b, We use the symbolic deduction engine to obtain a deduction closure. This returns a directed acyclic graph of statements. For each node in the graph, we perform traceback to find its minimal set of necessary premise and dependency deductions. For example, for the rightmost node ‘HA ⊥ BC’, traceback returns the green subgraph. c, The minimal premise and the corresponding subgraph constitute a synthetic problem and its solution. In the bottom example, points E and D took part in the proof despite being irrelevant to the construction of HA and BC; therefore, they are learned by the language model as auxiliary constructions.

Next we use a symbolic deduction engine on the sampled premises. The engine quickly deduces new true statements by following forward inference rules as shown in Fig. 3b . This returns a directed acyclic graph of all reachable conclusions. Each node in the directed acyclic graph is a reachable conclusion, with edges connecting to its parent nodes thanks to the traceback algorithm described in Methods . This allows a traceback process to run recursively starting from any node N , at the end returning its dependency subgraph G ( N ), with its root being N and its leaves being a subset of the sampled premises. Denoting this subset as P , we obtained a synthetic training example (premises, conclusion, proof) = ( P ,  N ,  G ( N )).
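The traceback step can be pictured on a toy dependency graph (the statement names and edges below are invented for illustration; this is a sketch, not the paper's engine):

```python
# Each derived statement maps to the statements it was deduced from;
# sampled premises map to an empty list. (Toy example only.)
deps = {
    "p1": [], "p2": [], "p3": [],   # sampled premises
    "s1": ["p1", "p2"],             # deduced from p1 and p2
    "s2": ["s1", "p3"],             # deduced from s1 and p3
}

def traceback(node):
    """Return the minimal premise set P and the dependency subgraph
    G(N) needed to derive `node`, by walking parent edges recursively."""
    premises, subgraph = set(), {}
    def visit(n):
        if n in subgraph:
            return
        parents = deps[n]
        subgraph[n] = parents
        if not parents:          # a leaf is a sampled premise
            premises.add(n)
        for p in parents:
            visit(p)
    visit(node)
    return premises, subgraph

# (P, N, G(N)) for the conclusion "s2" becomes one training example.
P, G = traceback("s2")
print(sorted(P))  # ['p1', 'p2', 'p3']
print(sorted(G))  # ['p1', 'p2', 'p3', 's1', 's2']
```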

In geometry, the symbolic deduction engine is a deductive database (DD) (refs. 10, 17), able to efficiently deduce new statements from the premises by means of geometric rules. DD follows deduction rules in the form of definite Horn clauses, that is, Q(x) ← P1(x), …, Pk(x), in which x are points, whereas P1, …, Pk and Q are predicates such as ‘equal segments’ or ‘collinear’. A full list of deduction rules can be found in ref. 10. To widen the scope of the generated synthetic theorems and proofs, we also introduce another component to the symbolic engine that can deduce new statements through algebraic rules (AR), as described in Methods. AR is necessary to perform angle, ratio and distance chasing, as often required in many olympiad-level proofs. We include concrete examples of AR in Extended Data Table 2. The combination DD + AR, which includes both their forward deduction and traceback algorithms, is a new contribution of our work and represents a new state of the art in symbolic reasoning in geometry.
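A deliberately minimal forward-chaining loop in the spirit of such Horn-clause rules might look as follows (the facts, predicates and the single rule are made up for illustration; the real DD engine matches rules against arbitrary point tuples rather than fixed ground facts):

```python
# Facts are (predicate, args) tuples; each rule pairs a tuple of body
# facts with a head fact. This toy engine only handles ground rules.
facts = {("coll", ("B", "D", "C")), ("eq_seg", ("B", "D", "D", "C"))}
rules = [
    # "if B, D, C are collinear and BD = DC, then D is the midpoint of BC"
    ((("coll", ("B", "D", "C")), ("eq_seg", ("B", "D", "D", "C"))),
     ("midpoint", ("D", "B", "C"))),
]

def deduction_closure(facts, rules):
    """Repeatedly apply every rule until no new statement appears,
    returning the set of all reachable conclusions."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if all(f in facts for f in body) and head not in facts:
                facts.add(head)
                changed = True
    return facts

closure = deduction_closure(facts, rules)
print(("midpoint", ("D", "B", "C")) in closure)  # True
```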

Generating proofs beyond symbolic deduction

So far, the generated proofs consist purely of deduction steps that are already reachable by the highly efficient symbolic deduction engine DD + AR. To solve olympiad-level problems, however, the key missing piece is generating new proof terms. In the above algorithm, such terms form the subset of P that N is independent of. In other words, these terms are the dependency difference between the conclusion statement and the conclusion objects. We move this difference from P to the proof so that a generative model that learns to generate the proof can learn to construct them, as illustrated in Fig. 3c. Such proof steps perform auxiliary constructions that symbolic deduction engines are not designed to do. In the general theorem-proving context, auxiliary construction is an instance of exogenous term generation, a notable challenge to all proof-search algorithms because it introduces infinite branching points to the search tree. In geometry theorem proving, auxiliary constructions have been the longest-standing subject of study since the inception of the field in 1959 (refs. 6, 7). Previous methods to generate them are based on hand-crafted templates and domain-specific heuristics 8, 9, 10, 11, 12 and are therefore limited to the subset of human experience expressible in hard-coded rules. Any neural solver trained on our synthetic data, on the other hand, learns to perform auxiliary constructions from scratch without human demonstrations.

Training a language model on synthetic data

The transformer 18 language model is a powerful deep neural network that learns to generate text sequences through next-token prediction, powering substantial advances in generative AI technology. We serialize ( P ,  N ,  G ( N )) into a text string with the structure ‘<premises><conclusion><proof>’. By training on such sequences of symbols, a language model effectively learns to generate the proof, conditioning on theorem premises and conclusion.
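The serialization step can be sketched as follows (the statement strings are invented; the real system uses its own domain-specific grammar and tokenization):

```python
def serialize(premises, conclusion, proof_steps):
    """Flatten one synthetic example (P, N, G(N)) into a single
    training string with the '<premises><conclusion><proof>' layout."""
    return ("<premises>" + "; ".join(premises)
            + "<conclusion>" + conclusion
            + "<proof>" + "; ".join(proof_steps))

s = serialize(
    ["B, D, C collinear", "BD = DC"],
    "D is the midpoint of BC",
    ["midpoint rule on B, D, C"],
)
print(s)
# <premises>B, D, C collinear; BD = DC<conclusion>D is the midpoint of BC<proof>midpoint rule on B, D, C
```

Training on such sequences with next-token prediction conditions the generated proof on the premises and conclusion that precede it in the string.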

Combining language modelling and symbolic engines

On a high level, proof search is a loop in which the language model and the symbolic deduction engine take turns to run, as shown in Fig. 1b,c . Proof search terminates whenever the theorem conclusion is found or when the loop reaches a maximum number of iterations. The language model is seeded with the problem statement string and generates one extra sentence at each turn, conditioning on the problem statement and past constructions, describing one new auxiliary construction such as “construct point X so that ABCX is a parallelogram”. Each time the language model generates one such construction, the symbolic engine is provided with new inputs to work with and, therefore, its deduction closure expands, potentially reaching the conclusion. We use beam search to explore the top k constructions generated by the language model and describe the parallelization of this proof-search algorithm in Methods .
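The alternating loop can be sketched in stub form (the `model` and `engine` below are invented stand-ins, not the actual language model or DD + AR):

```python
def proof_search(problem, goal, model, engine, beam_width=2, max_iters=4):
    """Alternate between the symbolic engine and the language model:
    expand each candidate state to its deduction closure; if the goal
    is not reached, branch on the model's top-k auxiliary constructions
    (a toy beam search)."""
    beams = [problem]                      # each beam is a statement list
    for _ in range(max_iters):
        new_beams = []
        for state in beams:
            closure = engine(state)
            if goal in closure:
                return closure             # proof found
            for aux in model(state)[:beam_width]:
                new_beams.append(state + [aux])
        beams = new_beams or beams
    return None                            # iteration budget exhausted

# Stubs: the "engine" reaches the goal only once the needed auxiliary
# point exists; the "model" always proposes that construction.
goal = "AB = AC"
engine = lambda s: s + [goal] if "D = midpoint BC" in s else s
model = lambda s: ["D = midpoint BC"]
print(proof_search(["triangle ABC"], goal, model, engine))
# ['triangle ABC', 'D = midpoint BC', 'AB = AC']
```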

Empirical evaluation

An olympiad-level benchmark for geometry.

Existing benchmarks of olympiad mathematics do not cover geometry because of a focus on formal mathematics in general-purpose languages 1 , 9 , whose formulation poses great challenges to representing geometry. Solving these challenges requires deep expertise and large research investment that are outside the scope of our work, which focuses on a methodology for theorem proving. For this reason, we adapted geometry problems from the IMO competitions since 2000 to a narrower, specialized environment for classical geometry used in interactive graphical proof assistants 13 , 17 , 19 , as discussed in Methods . Among all non-combinatorial geometry-related problems, 75% can be represented, resulting in a test set of 30 classical geometry problems. Geometric inequality and combinatorial geometry, for example, cannot be translated, as their formulation is markedly different to classical geometry. We include the full list of statements and translations for all 30 problems in the  Supplementary Information . The final test set is named IMO-AG-30, highlighting its source, method of translation and its current size.

Geometry theorem prover baselines

Geometry theorem provers in the literature fall into two categories. The first category is computer algebra methods, which treats geometry statements as polynomial equations of its point coordinates. Proving is accomplished with specialized transformations of large polynomials. Gröbner bases 20 and Wu’s method 21 are representative approaches in this category, with theoretical guarantees to successfully decide the truth value of all geometry theorems in IMO-AG-30, albeit without a human-readable proof. Because these methods often have large time and memory complexity, especially when processing IMO-sized problems, we report their result by assigning success to any problem that can be decided within 48 h using one of their existing implementations 17 .

AlphaGeometry belongs to the second category of solvers, often described as search/axiomatic or sometimes ‘synthetic’ methods. These methods treat theorem proving as a step-by-step search problem over a set of geometry axioms. Thanks to this, they typically return highly interpretable proofs accessible to human readers. Baselines in this category generally include symbolic engines equipped with human-designed heuristics. For example, Chou et al. provided 18 heuristics such as “If OA ⊥ OB and OA = OB, construct C on the opposite ray of OA such that OC = OA”, besides 75 deduction rules for the symbolic engine. Large language models 22, 23, 24 such as GPT-4 (ref. 25) can be considered to be in this category. Large language models have demonstrated remarkable reasoning ability on a variety of reasoning tasks 26, 27, 28, 29. When producing full natural-language proofs on IMO-AG-30, however, GPT-4 has a success rate of 0%, often making syntactic and semantic errors throughout its outputs and showing little understanding of geometry knowledge or of the problem statements themselves. Note that GPT-4’s performance on IMO problems may also be contaminated by public solutions in its training data, so a better GPT-4 result would still not be comparable with the other solvers. In general, search methods have no theoretical guarantee on their proving performance and are known to be weaker than computer algebra methods 13.

Synthetic data generation rediscovers known theorems and beyond

We find that our synthetic data generation can rediscover some fairly complex theorems and lemmas known to the geometry literature, as shown in Fig. 4, despite starting from randomly sampled theorem premises. This can be attributed to the use of composite actions described in Extended Data Table 1, such as ‘taking centroid’ or ‘taking excentre’, which, by chance, sampled a superset of well-known theorem premises under our large-scale exploration setting described in Methods. To study the complexity of synthetic proofs, Fig. 4 shows a histogram of synthetic proof lengths juxtaposed with proof lengths found on the test set of olympiad problems. Although the synthetic proof lengths are skewed towards shorter proofs, a small number of them still have lengths up to 30% longer than the hardest problem in the IMO test set. We find that synthetic theorems found by this process are not constrained by human aesthetic biases such as being symmetrical, therefore covering a wider set of scenarios known to Euclidean geometry. We performed deduplication as described in Methods, resulting in more than 100 million unique theorems and proofs, and did not find any IMO-AG-30 theorems among them, showing that the space of possible geometry theorems is still much larger than our discovered set.

Figure 4. Of the generated synthetic proofs, 9% contain auxiliary constructions. Only roughly 0.05% of the synthetic training proofs are longer than the average AlphaGeometry proof for the test-set problems. The most complex synthetic proof has an impressive length of 247 with two auxiliary constructions. Most synthetic theorem premises tend not to be symmetrical like human-discovered theorems, as they are not biased towards any aesthetic standard.

Language model pretraining and fine-tuning

We first pretrained the language model on all 100 million synthetically generated proofs, including ones of pure symbolic deduction. We then fine-tuned the language model on the subset of proofs that requires auxiliary constructions, accounting for roughly 9% of the total pretraining data, that is, 9 million proofs, to better focus on its assigned task during proof search.
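Schematically, the fine-tuning set is just the subset of the corpus whose proofs contain construction steps; a sketch (the record layout and flag name are invented):

```python
# Toy corpus: each record flags whether its proof constructs auxiliary
# points (roughly 9% of the real 100-million-proof dataset does).
corpus = [
    {"proof": "deduce ...",      "has_aux": False},
    {"proof": "construct D ...", "has_aux": True},
    {"proof": "deduce ...",      "has_aux": False},
]

pretrain_set = corpus                                   # all proofs
finetune_set = [ex for ex in corpus if ex["has_aux"]]   # aux-only subset
print(len(pretrain_set), len(finetune_set))  # 3 1
```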

Proving results on IMO-AG-30

The performance of ten different solvers on the IMO-AG-30 benchmark is reported in Table 1, of which eight, including AlphaGeometry, are search-based methods. Besides prompting GPT-4 to produce full proofs in natural language with several rounds of reflections and revisions, we also combine GPT-4 with DD + AR as another baseline to enhance its deduction accuracy. To achieve this, we use detailed instructions and few-shot examples in the prompt to help GPT-4 successfully interface with DD + AR, providing auxiliary constructions in the correct grammar. Prompting details of baselines involving GPT-4 are included in the Supplementary Information.

AlphaGeometry achieves the best result, with 25 problems solved in total. The previous state of the art (Wu’s method) solved ten problems, whereas the strongest baseline (DD + AR + human-designed heuristics) solved 18 problems, making use of the algebraic reasoning engine developed in this work and the human heuristics designed by Chou et al. 17 . To match the test time compute of AlphaGeometry, this strongest baseline makes use of 250 parallel workers running for 1.5 h, each attempting different sets of auxiliary constructions suggested by human-designed heuristics in parallel, until success or timeout. Other baselines such as Wu’s method or the full-angle method are not affected by parallel compute resources as they carry out fixed, step-by-step algorithms until termination.

Measuring the improvements made on top of the base symbolic deduction engine (DD), we found that incorporating algebraic deduction added seven solved problems to a total of 14 (DD + AR), whereas the language model’s auxiliary construction remarkably added another 11 solved problems, resulting in a total of 25. As reported in Extended Data Fig. 6 , we find that, using only 20% of the training data, AlphaGeometry still achieves state-of-the-art results with 21 problems solved. Similarly, using less than 2% of the search budget (beam size of 8 versus 512) during test time, AlphaGeometry can still solve 21 problems. On a larger and more diverse test set of 231 geometry problems, which covers textbook exercises, regional olympiads and famous theorems, we find that baselines in Table 1 remain at the same performance rankings, with AlphaGeometry solving almost all problems (98.7%), whereas Wu’s method solved 75% and DD + AR + human-designed heuristics solved 92.2%, as reported in Extended Data Fig. 6b .

Notably, AlphaGeometry solved both geometry problems in each of the years 2000 and 2015, a feat widely considered difficult for the average human contestant at the IMO. Further, the traceback process of AlphaGeometry found an unused premise in the translated IMO 2004 P1, as shown in Fig. 5, thereby discovering a more general version of the translated IMO theorem itself. We include AlphaGeometry solutions to all problems in IMO-AG-30 in the Supplementary Information and manually analyse some notable AlphaGeometry solutions and failures in Extended Data Figs. 2–5. Overall, we find that AlphaGeometry operates with a much lower-level toolkit for proving than humans do, limiting the coverage of the synthetic data, test-time performance and proof readability.

Fig. 5 | Left, top to bottom: IMO 2004 P1 stated in natural language, its translated statement and the AlphaGeometry solution. Thanks to the traceback algorithm used to extract the minimal premises, AlphaGeometry identifies a premise that is unnecessary for the proof to work: O does not have to be the midpoint of BC for P, B, C to be collinear. Right: top, the original theorem diagram; bottom, the generalized theorem diagram, in which O is freed from its midpoint position and P still stays on line BC. Note that the original problem requires P to lie between B and C, a condition that the generalized theorem and solution do not guarantee.

Human expert evaluation of AlphaGeometry outputs

Because AlphaGeometry outputs highly interpretable proofs, we used a simple template to automatically translate its solutions into natural language. To obtain an expert evaluation for the years 2000 and 2015, in which AlphaGeometry solves all geometry problems and therefore potentially passes the medal threshold, we submitted these solutions to the USA IMO team coach, who is experienced in grading mathematical olympiads and has authored books for olympiad geometry training. The AlphaGeometry solutions were recommended to receive full scores, thus passing the medal threshold of 14/42 in the corresponding years. We note that the IMO also evaluates humans in three other mathematical domains besides geometry and under human-centric constraints, such as no calculator use and 4.5-h time limits. We study time-constrained settings with 4.5-h and 1.5-h limits for AlphaGeometry in Methods and report the results in Extended Data Fig. 1.

Learning to predict the symbolic engine’s output improves the language model’s auxiliary construction

In principle, auxiliary construction strategies must depend on the details of the specific deduction engine they work with during proof search. We find that a language model without pretraining solves only 21 problems. This suggests that pretraining on pure deduction proofs generated by the symbolic engine DD + AR improves the success rate of auxiliary constructions. A language model without fine-tuning also degrades performance, although less severely, solving 23 problems compared with 25 in AlphaGeometry’s full setting.

Hard problems are reflected in AlphaGeometry proof length

Figure 6 measures the difficulty of solved problems using the public scores of human contestants at the IMO and plots them against the corresponding AlphaGeometry proof lengths. The result shows that, for the three problems with the lowest human score, AlphaGeometry also requires exceptionally long proofs and the help of language-model constructions to reach its solution. For easier problems (average human score > 3.5), however, we observe no correlation (correlation coefficient −0.06) between the average human score and AlphaGeometry proof length.

Fig. 6 | Among the solved problems, 2000 P6, 2015 P3 and 2019 P6 are the hardest for IMO participants and also require the longest proofs from AlphaGeometry. For easier problems, however, there is little correlation between AlphaGeometry proof length and human score.

AlphaGeometry is the first computer program to surpass the performance of the average IMO contestant in proving Euclidean plane geometry theorems, outperforming strong computer algebra and search baselines. Notably, we demonstrated through AlphaGeometry a neuro-symbolic approach for theorem proving by means of large-scale exploration from scratch, sidestepping the need for human-annotated proof examples and human-curated problem statements. Our method to generate and train language models on purely synthetic data provides a general guiding framework for mathematical domains that are facing the same data-scarcity problem.

Geometry representation

General-purpose formal languages such as Lean 31 still require a large amount of groundwork to describe most IMO geometry problems at present. We do not directly address this challenge as it requires deep expertise and substantial research outside the scope of theorem-proving methodologies. To sidestep this barrier, we instead adopted a more specialized language used in GEX 10, JGEX 17, MMP/Geometer 13 and GeoLogic 19, a line of work that aims to provide a logical and graphical environment for synthetic geometry theorems with human-like non-degeneracy and topological assumptions. Examples of this language are shown in Fig. 1d,f. Owing to its narrow formulation, 75% of all IMO geometry problems can be adapted to this representation. In this type of geometry environment, each proof step is logically and numerically verified and can also be evaluated by a human reader as if it were written by an IMO contestant, thanks to the highly natural grammar of the language. To cover more expressive algebraic and arithmetic reasoning, we also add integers, fractions and geometric constants to the vocabulary of this language. We do not push further for a complete solution to geometry representation as it is a separate and extremely challenging research topic that demands substantial investment from the mathematical formalization community.

Sampling consistent theorem premises

We developed a constructive diagram builder language similar to that used by JGEX 17 to construct one object in the premise at a time, instead of freely sampling many premises that involve several objects, therefore avoiding the generation of a self-contradicting set of premises. An exhaustive list of construction actions is shown in Extended Data Table 1 . These actions include constructions to create new points that are related to others in a certain way, that is, collinear, incentre/excentre etc., as well as constructions that take a number as its parameter, for example, “construct point X such that given a number α , ∠ ABX =  α ”. One can extend this list with more sophisticated actions to describe a more expressive set of geometric scenarios, improving both the synthetic data diversity and the test-set coverage. A more general and expressive diagram builder language can be found in ref.  32 . We make use of a simpler language that is sufficient to describe problems in IMO-AG-30 and can work well with the symbolic engine DD.
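As a rough illustration of this one-object-at-a-time sampling, the sketch below builds a premise set in which every new object is defined only in terms of already-constructed ones, so the result cannot be self-contradicting. The action names, arities and premise format here are hypothetical simplifications, not the actual diagram builder language:

```python
import random

# Hypothetical construction actions: (name, number of existing points needed).
ACTIONS = [
    ("free_point", 0),     # a point with no constraints
    ("midpoint", 2),       # midpoint of two existing points
    ("circumcentre", 3),   # circumcentre of three existing points
    ("foot", 3),           # foot of perpendicular from a point to a line
]

def sample_premises(num_objects, seed=0):
    """Sample premises one object at a time, each defined over prior objects."""
    rng = random.Random(seed)
    points, premises = [], []
    for i in range(num_objects):
        name = chr(ord("A") + i)
        # Only actions whose arity the current diagram can satisfy are legal;
        # "free_point" is always available, so the loop never stalls.
        legal = [(a, k) for a, k in ACTIONS if k <= len(points)]
        action, arity = rng.choice(legal)
        args = rng.sample(points, arity)
        premises.append((action, name, tuple(args)))
        points.append(name)
    return premises

for step in sample_premises(5):
    print(step)
```

Because the first object can only be a free point and every later action draws its arguments from points already constructed, any sampled premise list is consistent by construction.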

The symbolic deduction engine

The core functionality of the engine is deducing new true statements from the theorem premises. Deduction can be performed with geometric rules such as ‘If X then Y’, in which X and Y are sets of geometric statements such as ‘A, B, C are collinear’. We use the method of structured DD 10, 17 for this purpose as it can find the deduction closure in just seconds on standard non-accelerator hardware. To further enhance deduction, we also built into AlphaGeometry the ability to perform deduction through AR. AR enables proof steps that perform angle/ratio/distance chasing. Detailed examples of AR are shown in Extended Data Table 2. Such proof steps are ubiquitous in geometry proofs yet are not covered by geometric rules. We expand the Gaussian elimination process implemented in GeoLogic 19 to find the deduction closure for all possible linear operators in just seconds. Our symbolic deduction engine is an intricate integration of DD and AR, which we apply alternately to expand the joint closure of known true statements until expansion halts. This process typically finishes within a few seconds, and at most a few minutes, on standard non-accelerator hardware.

Algebraic reasoning

The literature on geometry theorem proving has no complete treatment of algebraic deduction. For example, in iGeoTutor 12, Z3 (ref. 33) is used to handle arithmetic inferences, but algebraic manipulations are not covered. DD (ref. 17) handles algebraic deductions by expressing them under a few limited deduction rules and is therefore unable to express more complex manipulations, leaving arithmetic inferences uncovered. The most general treatment so far is a process similar to that in ref. 34 for angle-only theorem discovery, implemented in GeoLogic 19 for both angles and ratios. We expand this formulation to cover all reasoning about angles, ratios and distances between points, as well as arithmetic reasoning with geometric constants such as ‘pi’ or ‘1:2’. Concrete examples of algebraic reasoning are given in Extended Data Table 2.

At a high level, we first convert the input linear equations into a matrix of their coefficients. In particular, we create a coefficient matrix A ∈ ℝ^(M×N), in which N is the number of variables and M is the number of input equations. In geometry, any equality is of the form a − b = c − d ⇔ a − b − c + d = 0. For example, the angle equality ∠ABC = ∠XYZ is represented as s(AB) − s(BC) = s(XY) − s(YZ), in which s(AB) is the angle between line AB and the x-direction, modulo pi. Similarly, the ratio equality AB:CD = EF:GH is represented as log(AB) − log(CD) = log(EF) − log(GH), in which log(AB) is the log of the length of segment AB. For distances, each variable is a (point, line) pair, representing a specific point on a specific line.

Because all equalities are of the form a − b − c + d = 0, we populate the row for each equality with values +1, −1, −1, +1 at the columns corresponding to variables a, b, c and d. Running Gaussian elimination on A returns a new matrix with a leading 1 in each pivot column, essentially representing each pivot variable as a unique linear combination of the remaining variables. As an example, suppose we have a − b = b − c, d − c = a − d and b − c = c − e as input equalities; running the Gaussian elimination process (denoted GE) expresses the pivot variables a, b and c in terms of the free variables d and e:

a = (3/2)d − (1/2)e,  b = d,  c = (1/2)d + (1/2)e.

From this result, we can deterministically and exhaustively deduce all new equalities by checking whether x1 = x2, or x1 − x2 = x2 − x3, or x1 − x2 = x3 − x4, in which {x1, x2, x3, x4} is any 4-permutation of all variables. In the Gaussian elimination above, for example, AR deduced that b = d from the three input equalities. To handle geometric constants such as ‘0.5 pi’ or ‘5:12’, we include ‘pi’ and ‘1’ as default variables in all coefficient matrices.
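The worked example above can be reproduced with a small sketch of the linear-membership check behind AR. This is a toy re-creation over exact rationals, not the paper's implementation: each equality becomes a coefficient row, the rows are kept in echelon form, and a candidate equality is implied exactly when its coefficient vector reduces to zero against that basis.

```python
from fractions import Fraction

# Variables: a, b, c, d, e.  Each equality "x - y = z - w" becomes a
# coefficient row of the matrix A, as described in the text:
#   a - b = b - c   ->  [ 1, -2,  1,  0,  0]
#   d - c = a - d   ->  [-1,  0, -1,  2,  0]
#   b - c = c - e   ->  [ 0,  1, -2,  0,  1]
rows = [
    [1, -2, 1, 0, 0],
    [-1, 0, -1, 2, 0],
    [0, 1, -2, 0, 1],
]

pivots = {}  # pivot column -> echelon row stored at that pivot

def eliminate(vec):
    """Forward-eliminate vec against the stored echelon rows."""
    vec = [Fraction(x) for x in vec]
    for j in range(len(vec)):
        if vec[j] != 0 and j in pivots:
            row = pivots[j]
            factor = vec[j] / row[j]
            vec = [v - factor * r for v, r in zip(vec, row)]
    return vec

# Build the echelon basis from the input equalities.
for r in rows:
    residue = eliminate(r)
    lead = next((j for j, x in enumerate(residue) if x != 0), None)
    if lead is not None:
        pivots[lead] = residue

# "b = d" corresponds to the coefficient vector of b - d; it is implied
# exactly when that vector reduces to all zeros.
print(all(x == 0 for x in eliminate([0, 1, 0, -1, 0])))  # True
```

Running the same check on, say, a − e returns a non-zero residue, so that equality is correctly not deduced.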

Deductive database implementation

Unlike the original implementation of DD, we use a graph data structure to capture the symmetries of geometry, rather than strings of canonical forms. With a graph data structure, we capture not only the symmetrical permutations of function arguments but also the transitivity of equality, collinearity and concyclicity. This graph data structure bakes in some of the deduction rules explicitly stated in the geometric rule list used in DD. These rules are therefore not used anywhere during exploration; they are applied implicitly and spelled out explicitly on demand when the final proof is serialized into text.

Traceback to find minimal proofs

Each deduction step needs to be coupled with a traceback algorithm, which returns the minimal set of immediate ancestor statements necessary to deduce the conclusion statement of the step. This is the core building block for extracting proof graphs and minimal premises, as described in the main text. A minimal-premise-extraction algorithm is necessary to avoid superfluous auxiliary constructions that contribute to the proof only through unnecessary transitivity. For example, ‘a = b’ and ‘b = c’ might not be necessary if ‘a = c’ can be obtained directly through other reasoning chains.

Traceback for geometric-rule deduction

To do this, we record the equality transitivity graph. For example, if ‘a = b’, ‘b = c’, ‘c = d’ and ‘a = d’ are deduced, resulting in nodes a, b, c and d being connected to the same ‘equality node’ e, we maintain a graph within e that has edges [(a, b), (b, c), (c, d), (a, d)]. This allows the traceback algorithm to perform a breadth-first search to find the shortest path of equality transitivity between any pair of variables among a, b, c and d. For collinearity and concyclicity, however, the representation is more complex: hypergraphs G(V, E) with 3-edges or 4-edges are used as the equality transitivity graph. The traceback is now equivalent to finding a minimum spanning tree (denoted MST) for the target set S of nodes (three collinear nodes or four concyclic nodes), whose weight is the cardinality of the union of its hyperedges e′:

MST(S) = argmin over trees T connecting S of | ⋃ {e′ : e′ ∈ T} |

Such optimization is NP-hard, as the decision version of vertex cover reduces to it. We therefore use a greedy algorithm to find a best-effort minimum spanning tree.
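For the plain equality case, the breadth-first traceback described above can be sketched as follows. The data layout (an edge list inside one equality node) is a hypothetical simplification of the structure in the text:

```python
from collections import deque

# Edges recorded inside the equality node e of the example in the text:
# each edge is one concrete deduction that linked two variables.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]

def shortest_chain(edges, src, dst):
    """BFS for the shortest transitivity chain from src to dst."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:  # reconstruct the path by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

print(shortest_chain(edges, "b", "d"))  # ['b', 'a', 'd']
```

The chain b, a, d uses two of the four recorded deductions, so the traceback charges only those two to the proof instead of the longer chain through c.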

Traceback for algebraic deduction

Traceback through Gaussian elimination can be done by recognizing that it is equivalent to a mixed-integer linear programming problem. Given the coefficient matrix A of input equations, constructed as described in the previous sections, and a target equation with coefficient vector b ∈ ℝ^N, we determine the minimal set of premises for b by defining non-negative integer decision vectors x, y ∈ ℤ^M and solving the following mixed-integer linear programming problem:

minimize sum_i (x_i + y_i)  subject to  A^T (x − y) = b,  x, y ≥ 0.

The minimal set of immediate parent nodes for the equality represented by b consists of the ith equations (ith rows of A) whose corresponding decision values (x_i − y_i) are non-zero.
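As an illustrative stand-in for the mixed-integer program (a brute-force search over premise subsets, feasible only at toy scale and not the paper's solver), one can recover the minimal premise set for the worked example in the ‘Algebraic reasoning’ section: there, b = d follows from the first two equalities alone, without the third.

```python
from fractions import Fraction
from itertools import combinations

def _echelon(rows):
    """Echelon basis over exact rationals via forward elimination."""
    basis = []
    for r in rows:
        r = [Fraction(x) for x in r]
        for e in basis:
            j = next(k for k, x in enumerate(e) if x != 0)
            if r[j] != 0:
                f = r[j] / e[j]
                r = [a - f * b for a, b in zip(r, e)]
        if any(x != 0 for x in r):
            basis.append(r)
    return basis

def in_span(rows, target):
    """target is implied iff adding it does not raise the rank."""
    return len(_echelon(list(rows) + [target])) == len(_echelon(rows))

def minimal_premises(rows, target):
    """Smallest subset of input equations whose span contains target."""
    for size in range(len(rows) + 1):
        for subset in combinations(range(len(rows)), size):
            if in_span([rows[i] for i in subset], target):
                return subset  # first hit is minimal, by size order
    return None

# Same toy system as in the text: a-b=b-c, d-c=a-d, b-c=c-e.
rows = [[1, -2, 1, 0, 0], [-1, 0, -1, 2, 0], [0, 1, -2, 0, 1]]
print(minimal_premises(rows, [0, 1, 0, -1, 0]))  # (0, 1)
```

The MILP achieves the same effect in one solver call by making the objective count the non-zero combination coefficients; the brute force here only illustrates what "minimal set of premises" means.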

Integrating DD and AR

DD and AR are applied alternately to expand their joint deduction closure. The output of DD, which consists of new statements deduced with deductive rules, is fed into AR and vice versa. For example, if DD deduces ‘AB is parallel to CD’, the slopes of lines AB and CD are set as equal variables in AR’s coefficient matrix A, defined in the ‘Algebraic reasoning’ section. Namely, a new row is added to A with ‘1’ at the column corresponding to the variable slope(AB) and ‘−1’ at the column of slope(CD). Gaussian elimination and mixed-integer linear programming are run again as AR executes, producing new equalities as inputs to the next iteration of DD. This loop repeats until the joint deduction closure stops expanding. Both DD and AR are deterministic processes that depend only on the theorem premises, and therefore require no design choices in their implementation.
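The alternation described above can be sketched as a fixed-point loop. The `dd_step`/`ar_step` callables and the toy facts below are hypothetical stand-ins for the real engines, kept only to show the control flow:

```python
def dd_ar_closure(premises, dd_step, ar_step):
    """Alternate DD and AR until the set of known facts stops growing."""
    known = set(premises)
    while True:
        before = len(known)
        known |= dd_step(known)   # geometric rule matching
        known |= ar_step(known)   # linear-algebraic closure of equalities
        if len(known) == before:  # joint deduction closure reached
            return known

# Toy stand-ins: DD turns "parallel" facts into slope equalities, and
# AR closes equalities under transitivity (one chaining step per call).
def toy_dd(facts):
    return {("eq", a, b) for (rel, a, b) in facts if rel == "para"}

def toy_ar(facts):
    eqs = {(a, b) for (rel, a, b) in facts if rel == "eq"}
    return {("eq", a, c) for (a, b1) in eqs for (b2, c) in eqs if b1 == b2}

closure = dd_ar_closure({("para", "AB", "CD"), ("para", "CD", "EF")},
                        toy_dd, toy_ar)
print(("eq", "AB", "EF") in closure)  # True
```

Note that the toy AR only chains one step per invocation; the outer loop supplies the iteration until nothing new appears, mirroring how the real DD and AR feed each other.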

Proof pruning

Although the set of immediate ancestors to any node is minimal, this does not guarantee that the fully traced-back dependency subgraph G(N) and the necessary premise P are minimal. Here we define minimality as the property that G(N) and P cannot be further pruned without losing conclusion reachability. Without minimality, we obtained many synthetic proofs with vacuous auxiliary constructions that bear little relation to the actual proof and can be entirely discarded. To solve this, we perform exhaustive trial and error, discarding each subset of the auxiliary points and rerunning DD + AR on the smaller subset of premises to verify goal reachability. At the end, we return the minimal proof obtainable across all trials. This proof-pruning procedure is performed both during synthetic data generation and after each successful proof search at test time.
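A minimal sketch of this pruning loop, assuming a `reaches_goal` callback that reruns DD + AR with only a given subset of auxiliary points (a hypothetical interface, not the paper's code):

```python
from itertools import combinations

def prune_aux(aux_points, reaches_goal):
    """Return the smallest subset of auxiliary points that still proves the goal.

    reaches_goal(kept) is assumed to rerun the symbolic engine on the
    premises restricted to the kept auxiliary points.
    """
    for size in range(len(aux_points) + 1):
        for kept in combinations(sorted(aux_points), size):
            if reaches_goal(set(kept)):
                return set(kept)  # subsets are tried smallest-first
    return set(aux_points)

# Toy check: suppose only the auxiliary point "M" is actually needed.
print(prune_aux({"M", "N", "P"}, lambda kept: "M" in kept))  # {'M'}
```

Trying subsets in increasing size means the first success is a minimum; the cost is exponential in the number of auxiliary points, which stays small in practice.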

Parallelized data generation and deduplication

We run our synthetic-data-generation process on a large number of parallel CPU workers, each seeded with a different random seed to reduce duplication. After running this process on 100,000 CPU workers for 72 h, we obtained roughly 500 million synthetic proof examples. We reformat the proof statements to their canonical form (for example, sorting the arguments of individual terms and sorting the terms within the same proof step) so that shallow duplicates, both within the data and against the test set, can be removed. In the end, we obtain 100 million unique theorem–proof examples, of which 9 million involve at least one auxiliary construction. We find no IMO-AG-30 problems in the synthetic data. On the set of geometry problems collected in JGEX 17, which consists mainly of problems of moderate difficulty and well-known theorems, we find nearly 20 problems in the synthetic data. This suggests that the training data covers a fair amount of common geometry knowledge, but the space of more sophisticated theorems is still much larger.
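The canonicalization step can be sketched as follows. The term format is a hypothetical simplification (and only relations whose argument order is genuinely irrelevant, such as collinearity or parallelism, are safe to sort this way); only argument order and term order within a step are normalized:

```python
def canonical(proof):
    """Canonical key for a proof: sort args within terms, terms within steps."""
    return tuple(
        tuple(sorted((name, tuple(sorted(args))) for name, args in step))
        for step in proof
    )

# Two proofs that differ only in argument and term ordering.
proof_a = [[("coll", ("A", "B", "C")), ("para", ("AB", "CD"))]]
proof_b = [[("para", ("CD", "AB")), ("coll", ("C", "B", "A"))]]

# Deduplication collapses them to a single entry.
seen = {canonical(p) for p in (proof_a, proof_b)}
print(len(seen))  # 1
```

Hashing these canonical keys makes deduplicating hundreds of millions of examples, and screening them against the test set, a set-membership operation.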

Language model architecture and training

We use the Meliad library 35 for transformer training with its base settings. The transformer has 12 layers, an embedding dimension of 1,024, eight attention heads and an inter-attention dense layer of dimension 4,096 with ReLU activation. Overall, the transformer has 151 million parameters, excluding embedding layers at its input and output heads. Our customized tokenizer is trained in ‘word’ mode using SentencePiece 36 and has a vocabulary size of 757. We limit the maximum context length to 1,024 tokens and use T5-style relative position embedding 37 . Sequence packing 38 , 39 is also used because more than 90% of our sequences are under 200 tokens in length. During training, a dropout 40 rate of 5% is applied pre-attention and post-dense. A 4 × 4 slice of TPUv3 (ref. 41 ) is used as the hardware accelerator. For pretraining, we train the transformer with a batch size of 16 per core and a cosine learning-rate schedule that decays from 0.01 to 0.001 over 10,000,000 steps. For fine-tuning, we maintain the final learning rate of 0.001 for another 1,000,000 steps. For the set-up with no pretraining, we decay the learning rate from 0.01 to 0.001 over 1,000,000 steps. We do not perform any hyperparameter tuning. These hyperparameter values are either selected to be a large round number (training steps) or are provided by default in the Meliad codebase.
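Assuming the usual half-cosine shape (the exact Meliad parameterization may differ), the described pretraining schedule, decaying from 0.01 to 0.001 over 10,000,000 steps and then holding the floor, can be sketched as:

```python
import math

def cosine_lr(step, peak=0.01, floor=0.001, total=10_000_000):
    """Half-cosine decay from peak to floor over `total` steps, then hold."""
    if step >= total:
        return floor  # held constant, e.g. during fine-tuning
    t = step / total  # progress in [0, 1)
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))

lr_start = cosine_lr(0)           # peak rate at the first step
lr_end = cosine_lr(10_000_000)    # floor rate, kept for fine-tuning
```

The half-cosine gives a slow initial decay, a fast middle phase and a gentle landing at the floor, which is why it is a common default for long pretraining runs.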

Parallelized proof search

Because the language model decoding process returns k different sequences describing k alternative auxiliary constructions, we perform a beam search over these k options, using the score of each beam as its value function. This set-up is highly parallelizable across beams, allowing substantial speed-up when there are parallel computational resources. In our experiments, we use a beam size of k  = 512, the maximum number of iterations is 16 and the branching factor for each node, that is, the decoding batch size, is 32. This is the maximum inference-time batch size that can fit in the memory of a GPU V100 for our transformer size. Scaling up these factors to examine a larger fraction of the search space might improve AlphaGeometry results even further.
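A schematic, sequential version of this beam search follows. The interfaces are hypothetical: in the real system, `expand` queries the language model plus symbolic engine, `score` is the language-model log-probability of each beam, and the beams run in parallel across workers.

```python
import heapq

def beam_search(root, expand, score, beam=512, depth=16, branch=32):
    """Keep the `beam` best states per level; stop on a solved state."""
    frontier = [root]
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for nxt in expand(state)[:branch]:  # branching factor per node
                if nxt.get("solved"):
                    return nxt
                candidates.append(nxt)
        frontier = heapq.nlargest(beam, candidates, key=score)
        if not frontier:
            return None
    return None

# Toy demo: each state counts constructions; "solved" after three of them.
def expand(state):
    n = state["n"] + 1
    return [{"n": n, "lp": state["lp"] - 1.0, "solved": n >= 3}]

result = beam_search({"n": 0, "lp": 0.0, "solved": False},
                     expand, score=lambda s: s["lp"], beam=4, depth=16)
print(result["n"])  # 3
```

With beam = 512, depth = 16 and branch = 32 as in the text, each level scores up to 512 × 32 candidates and keeps the best 512, which is what makes the search embarrassingly parallel across beams.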

For each problem, we used a pool of four GPU workers, each hosting a copy of the transformer language model to divide the work between alternative beams, and a pool of 10,000 CPU workers to host the symbolic solvers, shared across all beams across all 30 problems. This way, a problem that terminates early can contribute its share of computing power to longer-running problems. We record the running time of the symbolic solver on each individual problem, which—by design—stays roughly constant across all beams. We use this and the language model decoding speed to infer the necessary parallelism needed for each problem, in isolation, to stay under different time limits at the IMO in Extended Data Fig. 1 .

The effect of data and search

We trained AlphaGeometry on smaller fractions of the original training data (20%, 40%, 60% and 80%) and found that, even at 20% of training data, AlphaGeometry still solves 21 problems, more than the strongest baseline (DD + AR + human-designed heuristics) with 18 problems solved, as shown in Extended Data Fig. 6a . To study the effect of beam search on top of the language model, we reduced the beam size and search depth separately during proof search and reported the results in Extended Data Fig. 6c,d . We find that, with a beam size of 8, that is, a 64 times reduction from the original beam size of 512, AlphaGeometry still solves 21 problems. A similar result of 21 problems can be obtained by reducing the search depth from 16 to only two, while keeping the beam size constant at 512.

Evaluation on a larger test set

We evaluated AlphaGeometry and other baselines on a larger test set of 231 geometry problems, curated in ref.  17 . This set covers a wider range of sources outside IMO competitions: textbook examples and exercises, regional olympiads and famous geometry theorems; some are even more complex than typical IMO problems, such as the five circles theorem, Morley’s theorem or Sawayama and Thébault’s theorem. The results are reported in Extended Data Fig. 6b . The overall rankings of different approaches remained the same as in Table 1 , with AlphaGeometry solving almost all problems (98.7%). The strongest baseline DD + AR + human-designed heuristics solves 92.2%, whereas the previous state of the art solves 75%.

AlphaGeometry framework and applicability to other domains

The strength of AlphaGeometry’s neuro-symbolic set-up lies in its ability to generate auxiliary constructions, which is an important ingredient across many mathematical domains. In Extended Data Table 3 , we give examples in four other mathematical domains in which coming up with auxiliary constructions is key to the solution. In Extended Data Table 4 , we give a line-by-line comparison of a geometry proof and an inequality proof for the IMO 1964 Problem 2, highlighting how they both fit into the same framework.

Our paper shows that language models can learn to come up with auxiliary constructions from synthetic data, in which problem statements and auxiliary constructions are randomly generated together and then separated using the traceback algorithm to identify the dependency difference. Concretely, the AlphaGeometry framework requires the following ingredients:

1. An implementation of the domain’s objects and definitions.

2. A random premise sampler.

3. The symbolic engine(s) that operate within the implementation (1).

4. A traceback procedure for the symbolic engine.

Using these four ingredients and the algorithm described in the main text, one can generate synthetic data for any target domain. As shown in our paper, there are non-trivial engineering challenges in building each ingredient. For example, current formalizations of combinatorics are very nascent, posing challenges to (1) and (2). Also, building powerful symbolic engines for different domains requires deep domain expertise, posing challenges to (3) and (4). We consider applying this framework to a wider scope as future work and look forward to further innovations that tackle these challenges.

Transformer in theorem proving

Research in automated theorem proving has a long history dating back to the 1950s (refs. 6, 42, 43), resulting in highly optimized first-order logic solvers such as E (ref. 44) or Vampire 45. In the 2010s, deep learning matured as a new powerful tool for automated theorem proving, demonstrating great successes in premise selection and proof guidance 46, 47, 48, 49, as well as SAT solving 50. The transformer 18, meanwhile, exhibits outstanding reasoning capabilities across a variety of tasks 51, 52, 53. The first success in applying transformer language models to theorem proving was GPT-f (ref. 15). Its follow-up extensions 2, 16 further developed this direction, allowing machines to solve some olympiad-level problems for the first time. Innovations in the proof-search algorithm and online training 3 also improve transformer-based methods, solving a total of ten (adapted) IMO problems in algebra and number theory. These advances, however, are predicated on a substantial amount of human proof examples and standalone problem statements designed and curated by humans.

Geometry theorem proving

Geometry theorem proving evolved in an entirely separate space. Its literature is divided into two branches: computer algebra methods and search methods. The former is largely considered solved since the introduction of Wu’s method 21, which can theoretically decide the truth value of any geometrical statement of equality type, building on specialized algebraic tools introduced in earlier works 54, 55. Even though computer algebra has strong theoretical guarantees, its performance can be limited in practice owing to its large time and space complexity 56. Further, the methodology of computer algebra is not of interest to AI research, which instead seeks to prove theorems using search methods, a more human-like and general-purpose process.

Search methods also started as early as the 1950s (refs. 6, 7) and continued to develop throughout the twentieth century 57, 58, 59, 60. With the introduction of DD 10, 17, area methods 61 and full-angle methods 30, geometry solvers use higher-level deduction rules than Tarski’s or Hilbert’s axioms and can prove a larger number of more complex theorems than those operating in formal languages. Geometry theorem proving today, however, still relies on human-designed heuristics for auxiliary constructions 10, 11, 12, 13, 14. Geometry theorem proving falls behind the recent advances made by machine learning because its presence in formal mathematical libraries such as Lean 31 or Isabelle 62 is extremely limited.

Synthetic data in theorem proving

Synthetic data has long been recognized and used as an important ingredient in theorem proving 63 , 64 , 65 , 66 . State-of-the-art machine learning methods make use of expert iteration to generate a curriculum of synthetic proofs 2 , 3 , 15 . Their methods, however, only generate synthetic proofs for a fixed set of predefined problems, designed and selected by humans. Our method, on the other hand, generates both synthetic problems and proofs entirely from scratch. Aygun et al. 67 similarly generated synthetic proofs with hindsight experience replay 68 , providing a smooth range of theorem difficulty to aid learning similar to our work. AlphaGeometry, however, is not trained on existing conjectures curated by humans and does not learn from proof attempts on the target theorems. Their approach is thus orthogonal and can be used to further improve AlphaGeometry. Most similar to our work is Firoiu et al. 69 , whose method uses a forward proposer to generate synthetic data by depth-first exploration and trains a neural network purely on these synthetic data. Our work, on the other hand, uses breadth-first exploration, necessary to obtain the minimal proofs and premises, and uses a traceback algorithm to identify auxiliary constructions, thus introducing new symbols and hypotheses that the forward proposer cannot propose.

Data availability

The data supporting the findings of this work are available in the Extended Data and the Supplementary Information .  Source data are provided with this paper.

Code availability

Our code and model checkpoint are available at https://github.com/google-deepmind/alphageometry.

Change history

23 February 2024.

A Correction to this paper has been published: https://doi.org/10.1038/s41586-024-07115-7

Zheng, K., Han, J. M. & Polu, S. MiniF2F: a cross-system benchmark for formal olympiad-level mathematics. Preprint at https://doi.org/10.48550/arXiv.2109.00110 (2022).

Polu, S. et al. Formal mathematics statement curriculum learning. Preprint at https://doi.org/10.48550/arXiv.2202.01344 (2023).

Lample, G. et al. Hypertree proof search for neural theorem proving. Adv. Neural Inf. Process. Syst. 35 , 26337–26349 (2022).


Potapov, A. et al. in Proc. 13th International Conference on Artificial General Intelligence, AGI 2020 (eds Goertzel, B., Panov, A., Potapov, A. & Yampolskiy, R.) 279–289 (Springer, 2020).

Marić, F. Formalizing IMO problems and solutions in Isabelle/HOL. Preprint at https://arxiv.org/abs/2010.16015 (2020).

Gelernter, H. L. in Proc. First International Conference on Information Processing (IFIP) 273–281 (UNESCO, 1959).

Gelernter, H., Hansen, J. R. & Loveland, D. W. in Papers presented at the May 3–5, 1960, western joint IRE-AIEE-ACM computer conference 143–149 (ACM, 1960).

Harrison, J., Urban, J. & Wiedijk, F. in Handbook of the History of Logic Vol. 9 (ed. Siekmann, J. H.) 135–214 (North Holland, 2014).

van Doorn, F., Ebner, G. & Lewis, R. Y. in Proc. 13th International Conference on Intelligent Computer Mathematics, CICM 2020 (eds Benzmüller, C. & Miller, B.) 251–267 (Springer, 2020).

Chou, S. C., Gao, X. S. & Zhang, J. Z. A deductive database approach to automated geometry theorem proving and discovering. J. Autom. Reason. 25 , 219–246 (2000).


Matsuda, N. & Vanlehn, K. GRAMY: a geometry theorem prover capable of construction. J. Autom. Reason. 32 , 3–33 (2004).

Wang, K. & Su, Z. in Proc. Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015) (ACM, 2015).

Gao, X. S. & Lin, Q. in Proc. Automated Deduction in Geometry: 4th International Workshop, ADG 2002 (ed. Winkler, F.) 44–66 (Springer, 2004).

Zhou, M. & Yu, X. in Proc. 2nd International Conference on Artificial Intelligence in Education: Emerging Technologies, Models and Applications, AIET 2021 (eds Cheng, E. C. K., Koul, R. B., Wang, T. & Yu, X.) 151–161 (Springer, 2022).

Polu, S. & Sutskever, I. Generative language modeling for automated theorem proving. Preprint at https://arxiv.org/abs/2009.03393 (2020).

Han, J. M., Rute, J., Wu, Y., Ayers, E. W., & Polu, S. Proof artifact co-training for theorem proving with language models. Preprint at https://doi.org/10.48550/arXiv.2102.06203 (2022).

Ye, Z., Chou, S. C. & Gao, X. S. in Proc. Automated Deduction in Geometry: 7th International Workshop, ADG 2008 (eds Sturm, T. & Zengler, C.) 189–195 (Springer, 2011).

Vaswani, A. et al. Attention is all you need. Adv. Neural Inf. Process. Syst. 30 (2017).

Olšák, M. in Proc. 7th International Conference on Mathematical Software – ICMS 2020 (eds Bigatti, A., Carette, J., Davenport, J., Joswig, M. & de Wolff, T.) 263–271 (Springer, 2020).

Bose, N. K. in Multidimensional Systems Theory and Applications 89–127 (Springer, 1995).

Wu, W.-T. On the decision problem and the mechanization of theorem-proving in elementary geometry. Sci. Sin. 21 , 159–172 (1978).

MathSciNet   Google Scholar  

Radford, A., Narasimhan, K., Salimans, T. & Sutskever, I. Improving language understanding by generative pre-training. Preprint at https://paperswithcode.com/paper/improving-language-understanding-by (2018).

Radford, A. et al. Better language models and their implications. OpenAI Blog https://openai.com/blog/better-language-models (2019).

Brown, T. et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33 , 1877–1901 (2020).

Bubeck, S. et al. Sparks of artificial general intelligence: early experiments with GPT-4. Preprint at https://arxiv.org/abs/2303.12712 (2023).

Lewkowycz, A. et al. Solving quantitative reasoning problems with language models. Adv. Neural Inf. Process. Syst. 35 , 3843–3857 (2022).

Liang, P. et al. Holistic evaluation of language models. Transact. Mach. Learn. Res. https://doi.org/10.48550/arXiv.2211.09110 (2023).

Srivastava, A. et al. Beyond the imitation game: quantifying and extrapolating the capabilities of language models. Transact. Mach. Learn. Res. https://doi.org/10.48550/arXiv.2206.04615 (2023).

Wei, J. et al. Emergent abilities of large language models. Transact. Mach. Learn. Res. https://doi.org/10.48550/arXiv.2206.07682 (2022).

Chou, S. C., Gao, X. S. & Zhang, J. Z. Automated generation of readable proofs with geometric invariants: II. Theorem proving with full-angles. J. Autom. Reason. 17 , 349–370 (1996).

de Moura, L. & Ullrich, S. in Proc. 28th International Conference on Automated Deduction, CADE 28 (eds Platzer, A. & Sutcliffe, G.) 625–635 (Springer, 2021).

Krueger, R., Han, J. M. & Selsam, D. in Proc. 28th International Conference on Automated Deduction, CADE 28 (eds Platzer, A. & Sutcliffe, G.) 577–588 (Springer, 2021).

de Moura, L. & Bjørner, N. in Proc. 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2008 (eds Ramakrishnan, C. R. & Rehof, J.) 337–340 (Springer, 2008).

Todd, P. A method for the automated discovery of angle theorems. EPTCS 352 , 148–155 (2021).

Hutchins, D., Rabe, M., Wu, Y., Schlag, I. & Staats, C. Meliad. Github https://github.com/google-research/meliad (2022).

Kudo, T. & Richardson, J. SentencePiece: a simple and language independent subword tokenizer and detokenizer for neural text processing. Preprint at https://arxiv.org/abs/1808.06226 (2018).

Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21 , 5485–5551 (2020).

Kosec, M., Fu, S. & Krell, M. M. Packing: towards 2x NLP BERT acceleration. Preprint at https://openreview.net/forum?id=3_MUAtqR0aA (2021).

Krell, M. M., Kosec, M., Perez, S. P. & Iyer, M., Fitzgibbon A. W. Efficient sequence packing without cross-contamination: accelerating large language models without impacting performance. Preprint at https://arxiv.org/abs/2107.02027 (2022).

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15 , 1929–1958 (2014).

Norrie, T. et al. The design process for Google’s training chips: TPUv2 and TPUv3. IEEE Micro. 41 , 56–63 (2021) Feb 9.

Article   Google Scholar  

Gilmore, P. C. A proof method for quantification theory: its justification and realization. IBM J. Res. Dev. 4 , 28–35 (1960).

Davis, M. & Putnam, H. A computing procedure for quantification theory. J. ACM. 7 , 201–215 (1960).

Schulz, S. E – a brainiac theorem prover. AI Commun. 15 , 111–126 (2002).

Riazanov, A. & Voronkov, A. in Proc. First International Joint Conference on Automated Reasoning, IJCAR 2001 (eds Goré, R., Leitsch, A. & Nipkow, T.) 376–380 (Springer, 2001).

Irving, G. et al. DeepMath - deep sequence models for premise selection. Adv. Neural Inf. Process. Syst. https://doi.org/10.48550/arXiv.1606.04442 (2016).

Wang, M., Tang, Y., Wang, J. & Deng, J. Premise selection for theorem proving by deep graph embedding. Adv. Neural Inf. Process. Syst. https://doi.org/10.48550/arXiv.1709.09994 (2017).

Loos, S., Irving, G., Szegedy, C. & Kaliszyk, C. Deep network guided proof search. Preprint at https://arxiv.org/abs/1701.06972 (2017).

Bansal, K., Loos, S., Rabe, M., Szegedy, C. & Wilcox S. in Proc. 36th International Conference on Machine Learning 454–463 (PMLR, 2019).

Selsam, D. et al. Learning a SAT solver from single-bit supervision. Preprint at https://doi.org/10.48550/arXiv.1802.03685 (2019).

Saxton, D., Grefenstette, E., Hill, F. & Kohli, P. Analysing mathematical reasoning abilities of neural models. Preprint at https://doi.org/10.48550/arXiv.1904.01557 (2019).

Lample, G. & Charton F. Deep learning for symbolic mathematics. Preprint at https://doi.org/10.48550/arXiv.1912.01412 (2019).

Charton, F., Hayat, A. & Lample, G. Learning advanced mathematical computations from examples. Preprint at https://doi.org/10.48550/arXiv.2006.06462 (2021).

Collins, G. E. in Proc. 2nd GI Conference on Automata Theory and Formal Languages (ed. Barkhage, H.) 134–183 (Springer, 1975).

Ritt, J. F. Differential Algebra (Colloquium Publications, 1950).

Chou, S. C. Proving Elementary Geometry Theorems Using Wu’s Algorithm . Doctoral dissertation, Univ. Texas at Austin (1985).

Nevins, A. J. Plane geometry theorem proving using forward chaining. Artif. Intell. 6 , 1–23 (1975).

Coelho, H. & Pereira, L. M. Automated reasoning in geometry theorem proving with Prolog. J. Autom. Reason. 2 , 329–390 (1986).

Quaife, A. Automated development of Tarski’s geometry. J. Autom. Reason. 5 , 97–118 (1989).

McCharen, J. D., Overbeek, R. A. & Lawrence, T. in The Collected Works of Larry Wos 166–196 (2000).

Chou, S. C., Gao, X. S. & Zhang, J. Machine Proofs in Geometry: Automated Production of Readable Proofs for Geometry Theorems (World Scientific, 1994).

Paulson, L. C. (ed.) Isabelle: A Generic Theorem Prover (Springer, 1994).

Wu, Y., Jiang, A. Q., Ba, J. & Grosse, R. INT: an inequality benchmark for evaluating generalization in theorem proving. Preprint at https://doi.org/10.48550/arXiv.2007.02924 (2021).

Zombori, Z., Csiszárik, A., Michalewski, H., Kaliszyk, C. & Urban, J. in Proc. 30th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (eds Das, A. & Negri, S.) 167–186 (Springer, 2021).

Fawzi, A., Malinowski, M., Fawzi, H., Fawzi, O. Learning dynamic polynomial proofs. Adv. Neural Inf. Process. Syst. https://doi.org/10.48550/arXiv.1906.01681 (2019).

Wang, M. & Deng, J. Learning to prove theorems by learning to generate theorems. Adv. Neural Inf. Process. Syst. 33 , 18146–18157 (2020).

Aygün, E. et al. in Proc. 39th International Conference on Machine Learning 1198–1210 (PMLR, 2022).

Andrychowicz, M. et al. Hindsight experience replay. Adv. Neural Inf. Process. Syst. https://doi.org/10.48550/arXiv.1707.01495 (2017).

Firoiu, V. et al. Training a first-order theorem prover from synthetic data. Preprint at https://doi.org/10.48550/arXiv.2103.03798 (2021).


Acknowledgements

This project is a collaboration between the Google Brain team and the Computer Science Department of New York University. We thank R. A. Saurous, D. Zhou, C. Szegedy, D. Hutchins, T. Kipf, H. Pham, P. Veličković, E. Lockhart, D. Dwibedi, K. Cho, L. Pinto, A. Canziani, T. Wies, H. He’s research group, E. Chen (the USA’s IMO team coach), M. Olsak and P. Bak.

Author information

Authors and Affiliations

Google DeepMind, Mountain View, CA, USA

Trieu H. Trinh, Yuhuai Wu, Quoc V. Le & Thang Luong

Computer Science Department, New York University, New York, NY, USA

Trieu H. Trinh & He He


Contributions

T.H.T. conceived the project, built the codebase, carried out experiments, requested manual evaluation from experts and drafted the manuscript. Y.W. advocated for the neuro-symbolic setting and advised on data/training/codebase choices. Q.V.L. advised on scientific methodology and revised the manuscript. H.H. advised on scientific methodology, experimental set-ups and the manuscript. T.L. is the PI of the project, advised on model designs/implementations/experiments and helped with manuscript structure and writing.

Corresponding authors

Correspondence to Trieu H. Trinh or Thang Luong .

Ethics declarations

Competing interests.

The following US patent is related to this work: “Training language model neural networks using synthetic reasoning data”, filed in the United States Patent and Trademark Office (USPTO) on 1 May 2023 as application no. 63/499,469.

Peer review

Peer review information.

Nature thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 The minimum number of parallel CPU workers to solve all 25 problems and stay under the time limit, given four parallel copies of the V100 GPU-accelerated language model.

Each problem has a different running time, resulting from the unique size of its deduction closure. We observed that running time does not correlate with the difficulty of the problem. For example, IMO 2019 P6 is much harder than IMO 2008 P1a, yet it requires far less parallelization to reach a solution within IMO time limits.

Extended Data Fig. 2 Side-by-side comparison of AlphaGeometry proof versus human proof on the translated IMO 2004 P1.

Both the AlphaGeometry and human solutions recognize the axis of symmetry between M and N through O. AlphaGeometry constructs point K to materialize this axis, whereas humans simply use the existing point R for the same purpose. This is a case in which proof pruning by itself cannot remove K, and a sign of similar redundancy in our synthetic data. To prove five-point concyclicity, AlphaGeometry outputs very lengthy, low-level steps, whereas humans use a high-level insight (OR is the axis of symmetry of both LN and AM) to obtain a broad set of conclusions all at once. For algebraic deductions, AlphaGeometry cannot flesh out its intermediate derivations, which are carried out implicitly by Gaussian elimination, leading to low readability. Overall, this comparison points to the use of higher-level tools to improve the synthetic data, the proof search and the readability of AlphaGeometry. Note that in the original IMO 2004 P1, the point P is proven to be between B and C. The generalized version needs further constraints on the position of O to satisfy this betweenness requirement.

Extended Data Fig. 3 Side-by-side comparison of human proof and AlphaGeometry proof for the IMO 2000 P6.

This is a harder problem (average human score = 1.05/7), with a large number of objects in the problem statement, resulting in a very crowded diagram. Left, the human solution uses complex numbers. With a well-chosen coordinate system, the problem is greatly simplified and a solution follows naturally through algebraic manipulation. Right, the AlphaGeometry solution involves two auxiliary constructions and more than 100 deduction steps, many of them low-level and extremely tedious to a human reader. This is a case in which the search-based solution is much less readable and much less intuitive than coordinate bashing. A more structured organization, that is, a high-level proof outline, could improve the readability of the AlphaGeometry solution substantially. Again, this suggests building into AlphaGeometry many higher-level deduction rules to encapsulate large groups of low-level deductions into fewer proof steps.

Extended Data Fig. 4 Side-by-side comparison of human proof and AlphaGeometry proof for the IMO 2019 P2.

This is one of the five problems unsolved by AlphaGeometry. Left, the human solution uses both auxiliary constructions and barycentric coordinates. With a well-chosen coordinate system, a solution becomes available through advanced algebraic manipulation. Right, the AlphaGeometry solution when provided with the ground-truth auxiliary construction for a synthetic proof. This auxiliary construction can be found quickly with knowledge of Reim's theorem, which is not included in the list of deduction rules used by the symbolic engine during synthetic data generation. Including such high-level theorems in the synthetic data generation could greatly improve the coverage of the synthetic data and thus the auxiliary construction capability. Further, higher-level steps using Reim's theorem would also cut the current proof length by a factor of 3.

Extended Data Fig. 5 Human proof for the IMO 2008 P6.

This problem is unsolved by AlphaGeometry and is also the hardest among all 30 problems, with an average human score of only 0.28/7. The human proof uses four auxiliary constructions (diameters of circles W1 and W2) and high-level tools such as the Pitot theorem and the notion of homothety. These high-level concepts are not available to the current version of the symbolic deduction engine, either during synthetic data generation or during proof search. Supplying AlphaGeometry with the auxiliary constructions used in this human proof also does not yield a solution. There is also no guarantee that a synthetic solution exists for AlphaGeometry, across all possible auxiliary constructions, without enhancing its symbolic deduction with more powerful rules. Again, this suggests that enhancing the symbolic engine with the more powerful tools that IMO contestants are trained to use can improve both the synthetic data and the test-time performance of AlphaGeometry.

Extended Data Fig. 6 Analysis of AlphaGeometry performance under changes made to its training and testing.

a, The effect of reducing training data on AlphaGeometry performance. At 20% of the training data, AlphaGeometry still solves 21 problems, outperforming all other baselines. b, Evaluation on a larger set of 231 geometry problems, covering a diverse range of sources outside IMO competitions. The rankings of the different machine solvers stay the same as in Table 1, with AlphaGeometry solving almost all problems. c, The effect of reducing beam size at test time on AlphaGeometry performance. At beam size 8, a 64-fold reduction from its full setting, AlphaGeometry still solves 21 problems, outperforming all other baselines. d, The effect of reducing search depth on AlphaGeometry performance. At depth 2, AlphaGeometry still solves 21 problems, outperforming all other baselines.

Supplementary information


Supplementary Sections 1 and 2. Section 1 contains GPT-4 prompting details for two scenarios: (1) GPT-4 producing full proofs in natural language and (2) GPT-4 interfacing with DD + AR. Section 2 contains AlphaGeometry solutions for the problems in IMO-AG-30: it lists the 30 problem statements, their diagrams to aid understanding, and the AlphaGeometry solution (if any), sequentially.

Source data

Source Data Fig. 2, Source Data Fig. 4, Source Data Fig. 6, Source Data Extended Data Fig. 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Trinh, T.H., Wu, Y., Le, Q.V. et al. Solving olympiad geometry without human demonstrations. Nature 625 , 476–482 (2024). https://doi.org/10.1038/s41586-023-06747-5


Received: 30 April 2023

Accepted: 13 October 2023

Published: 17 January 2024

Issue Date: 18 January 2024

DOI: https://doi.org/10.1038/s41586-023-06747-5


This article is cited by

Next-generation AI for connectomics

  • Michał Januszewski

Nature Methods (2024)

AI-driven research in pure mathematics and theoretical physics

  • Yang-Hui He

Nature Reviews Physics (2024)

A Survey of LLM Datasets: From Autoregressive Model to AI Chatbot

  • Xin-Jian Ma

Journal of Computer Science and Technology (2024)

The disruptive AlphaGeometry: is it the beginning of the end of mathematics education?

  • Quan-Hoang Vuong
  • Manh-Tung Ho

AI & SOCIETY (2024)

What is generative in generative artificial intelligence? A design-based perspective

  • Antoine Bordas
  • Pascal Le Masson
  • Benoit Weil

Research in Engineering Design (2024)



Richard Rusczyk’s Worldwide Math Camp


At the start of a YouTube video titled “Art of Problem Solving: Least Common Multiple,” Richard Rusczyk invites viewers to play a game. Every twenty-four seconds, we’re supposed to clap; every forty-five seconds, we’re supposed to jump. The challenge is to keep going until we clap and jump at the same time. Rusczyk, who is dark-haired, clean-shaven, and boyish, gestures to a digital timer that appears in a corner of the screen. He starts the clock, stares at it, and fidgets. “Um, how long is this gonna take?” he asks, rolling his eyes like a teen-ager. “I hate waiting.”

When the timer hits twenty-four seconds, Rusczyk claps. When it reaches forty-five, he jumps. Meanwhile, on a digital blackboard, he starts trying to figure out when the clap and the jump will coincide. Over the course of a continuous seven-minute take, Rusczyk jumps and claps at the right times while scribbling equations. First, he tries writing out multiples of twenty-four, but gets bored. Then he tries expressing twenty-four and forty-five as products of their prime-number components: twenty-four is 2³ x 3¹, and forty-five is 3² x 5¹. “This is gonna work,” he says, clapping. Just as he concludes that it will take three hundred and sixty seconds for the clap and jump to converge, he claps and leaps simultaneously; as it happens, the timer has reached three hundred and sixty. It’s an exuberant, precise performance intended for middle-school kids, or younger ones, who are capable of doing advanced math.
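Rusczyk’s prime-factorization argument can be checked in a few lines of Python. This is only an illustrative sketch (the `lcm` helper is written out for clarity, though Python 3.9+ also ships `math.lcm`):

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple via the identity lcm(a, b) = a * b / gcd(a, b)."""
    return a * b // gcd(a, b)

# Clap every 24 s, jump every 45 s: 24 = 2^3 * 3 and 45 = 3^2 * 5.
# Taking the highest power of each prime gives 2^3 * 3^2 * 5 = 360,
# so the clap and the jump first coincide at 360 seconds.
print(lcm(24, 45))  # → 360
```

The same identity explains why listing multiples of twenty-four, as Rusczyk first tries on the blackboard, is slower: the factorization route skips straight to the answer.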

Rusczyk, who lives near San Diego, founded Art of Problem Solving—or AoPS—eighteen years ago as a resource for budding math prodigies. Exceptionally gifted young math students often find classroom math unbearably easy and tedious; their parents can have difficulty obtaining sufficiently stimulating lessons. By offering online instruction in math that’s more complex than what’s in standard gifted-and-talented programs, AoPS has become a lifeline for math whizzes. Its free online forums also serve as a vital social network, allowing math prodigies to connect with kindred spirits every day.

Rusczyk began posting free videos more than a decade ago; he ad-libs without a written script. He made “Least Common Multiple,” with its quirky dramatization of a humdrum numerical concept, in 2011, at age forty. Some of his videos have garnered hundreds of thousands of views; occasionally, they feature his alter ego, a gravelly voiced character in dark shades and a black hoodie. Onscreen, Rusczyk conveys a playful, experimental fearlessness that sweeps young learners along. “It’s a slightly intangible quality that some people have, and he’s got it in spades,” the mathematician Sam Vandervelde, who is the head of Proof School, a private, math-centric liberal arts academy in San Francisco, told me.

Kristen Chandler, a former math teacher who is the executive director of MathCounts, a nonprofit that runs a popular middle-school math contest series, told me that Rusczyk is “a rock star at our competitions.” (Along with Raytheon Technologies, the Department of Defense STEM, and others, AoPS is a sponsor of the MathCounts program.) Pre-pandemic, Rusczyk attended the MathCounts national finals each May as an invited speaker; Chandler recalled how contestants and parents flocked to get his autograph and take selfies with him. One competitor asked Rusczyk to sign his forehead with a marker.

For years, AoPS grew gradually. It released print textbooks, Math Olympiad-prep materials, and an accredited online curriculum, including a free adaptive learning system containing thousands of hard math problems. In 2012, it began rolling out Beast Academy, an elementary-school curriculum in which advanced mathematical concepts are communicated to young math geeks by wisecracking comic-book monsters. It also opened ten brick-and-mortar learning centers across the country. By 2019, about thirty-six thousand math students from around the world were using its paid online curriculum or in-person courses, and tens of thousands more were consulting its textbooks for independent study.

In the spring of 2020, when schools shuttered, the company’s Web site traffic jumped five- to six-fold, and enrollments doubled. AoPS’s hundred employees began telecommuting, except for Rusczyk and four warehouse workers. On nights or weekends, Rusczyk and his wife, Vanessa, would go into the empty company headquarters—a two-story office building in the suburb of Rancho Bernardo—to help fill book orders. One Sunday when he was in the office, we connected over Zoom. He was dressed in a short-sleeved blue plaid shirt. Five feet seven, with cropped hair, Rusczyk has a quick, self-deprecating wit and sometimes laughs like a kid, almost doubled over. On a brief tour, he showed me stacks of book boxes in the warehouse and framed illustrations of Beast Academy monsters. In a dim hallway, the overhead fluorescent lighting had stopped working, except for one eerily flickering panel. “This is where the zombies are going to get me in the zombie apocalypse,” he said, grinning. (He read a lot of sci-fi and fantasy as a child.)

Now fifty, Rusczyk bonds easily with math-obsessed kids because he used to be one of them. Growing up, he was fast with calculations and showed a brilliant, intuitive grasp of geometric relationships. He had a competitive streak, and won many math competitions. But, at the same time, he experienced deflating setbacks that helped dissuade him from the academic pursuit of mathematics. He loved math—it had taught him about resilience, creativity, and the joys of finding one’s tribe. Still, he faced a conundrum: If you’re a math prodigy who doesn’t want to become a mathematician, what do you do with your life? Art of Problem Solving was his solution.

Rusczyk was born in Idaho Falls. He and his younger sister attended elementary schools in half a dozen states as their father, a U.S. naval officer and nuclear engineer, moved from one base to the next. Small but naturally athletic, Rusczyk played basketball and spouted pro baseball statistics—these “got him into numbers,” his mother, Claire, a former grade-school teacher, told me. In 1983, Claire read a newspaper article about the launch of the MathCounts program. Rusczyk, who was in seventh grade, signed up and did well; he loved being surrounded by dozens of teens who got a kick out of wrestling with numbers. Two years later, after the family had settled in Decatur, Alabama, he placed twenty-fourth at the MathCounts national finals.

Rusczyk became the star of his high school’s math team, which travelled to competitions around the Southeast. He also participated individually in the American Mathematics Competitions, a rigorous series organized by the Mathematical Association of America (M.A.A.). The contests built up to the U.S.A. Mathematical Olympiad, which back then was a five-question, three-and-a-half-hour examination. Rusczyk played tennis and ran cross-country, but he relished math and the company of his math buddies even more. His bookshelves were filled with math-contest ribbons and trophies. “I was definitely a trophy hunter,” he said. He spent hours practicing with old math-contest problems in his bedroom.

In June, 1987, after his sophomore year, he was invited to the M.A.A.’s Mathematical Olympiad Summer Program, reserved for those who’d placed in the top tier of the U.S.A. Mathematical Olympiad. The program was an intensive, monthlong math boot camp, held each year at either West Point or the Naval Academy, in Annapolis. (A redesigned program is now hosted by Carnegie Mellon University.) At West Point, Rusczyk was one of two dozen boot campers, nearly all boys. They stayed in spartan dorm rooms and were rousted early by the bugle call of reveille. Largely based on three exams in the first week—each roughly four hours long—six students would be chosen to represent the U.S. at the International Mathematical Olympiad, or I.M.O., in July.

Rusczyk arrived excited, expecting that he would be able to hold his own. On the first day, a professor stood at a blackboard and wrote “Counting” in chalk; the topic—“falling factorials”—was unfamiliar. Within minutes, Rusczyk was bewildered. It quickly became obvious that he wasn’t even close to being the brightest kid in the room. It was an unsettling feeling. Other students absorbed the math like sponges; some were clearly geniuses. Rusczyk couldn’t solve a single problem on the gruelling practice exams. Being outgunned by his cerebral classmates was inspiring but also terrifying. “I shut down by the end of the first week,” he recalled.

Still, the group was friendly, bantering over board games and Ultimate Frisbee. Rusczyk, who had brought his basketball, nimbly dribbled around the other campers. He formed strong friendships, including with Vandervelde, a fellow-Southerner. He noticed that Vandervelde and other top students—among them, the future mathematician and writer Jordan Ellenberg—appeared enthralled with pondering abstract numerical concepts and questions for their own sakes. Rusczyk realized that, for him, the appeal of math lay more in competition and camaraderie.

Rusczyk didn’t make the I.M.O. team; later he learned that a few other students were also struggling. The next summer, he attended the boot camp again, this time in Annapolis, and was still frequently perplexed. Nevertheless, he kept studying; in his senior year of high school, he began working through some mathematical proofs, attaining a more genuine grasp of the concepts. He graduated as valedictorian, was a winner of the U.S.A. Mathematical Olympiad—at that time, eight medals were awarded each year—and returned to the boot camp for a third summer. Although he didn’t qualify for the I.M.O. team that year, either—Vandervelde and Ellenberg did—he was picked as an alternate. He left the camp early after falling ill, ranked as one of the top eight high-school math students in the nation.

Rusczyk went to Princeton, famed for its powerhouse math department. But he was burned out. The boot camps had left him certain that he lacked the creativity to solve the great abstract mysteries of theoretical mathematics. (Paul Zeitz, an emeritus math professor at the University of San Francisco, told me that Rusczyk may have been too hasty in reaching this conclusion: performance in math contests, Zeitz said, has little to do with becoming a superb mathematician.) Rusczyk also doubted that he possessed the patience to devote a lifetime to math research. He decided to major in chemical engineering.

And yet he wasn’t quite ready to leave the math-contest world behind. Soon afterward, for fun, Rusczyk, Vandervelde, and Sandor Lehoczky, a younger Olympiad boot camper, created their own mail-in math contest. They called it the Mandelbrot Competition, named after Benoit Mandelbrot, the father of fractals. The trio ran into an issue: they found that the contest problems they came up with were too hard for the participants. Rusczyk discussed the problem with Lehoczky, who was also at Princeton. They concluded that opportunities to learn advanced math were scarce and unevenly distributed. Many young math enthusiasts didn’t know about competitions and élite summer programs; looking back at their Olympiad boot camp experiences, the pair saw that, although some of the mathletes were unquestionably smarter, others simply had earlier exposure to complex math, or access to university mathematicians, or had attended special schools with a high-octane math-team culture. “We should write a book,” Lehoczky declared; it could help democratize advanced math. The two went on to self-publish a two-volume textbook titled “The Art of Problem Solving.” The book taught “not facts, but approaches,” they wrote. “If you find yourself memorizing formulas, you are missing the point.”

In the fall of 1993, Rusczyk—newly married to Vanessa—started a Ph.D. in chemical engineering at Stanford. But research still struck him as unappealing. He dropped out after eight weeks. Meanwhile, orders for the math textbook were trickling in. He drove to local schools, hawking the book and hunting for a job as a math teacher. A small private high school hired him, but it wasn’t the right fit, either: he liked teaching, but it was tough to win over the students who abhorred math. Rusczyk figured that he could reach a thousand keen math students a year with the textbook. That summer, he quit the teaching job, too.

In the mid-nineties, Wall Street was emerging as a place where mathematical minds could excel. Rusczyk was recruited for a job at the hedge fund D. E. Shaw; during his interview, he ran into two math-competition geeks he knew. He enjoyed his time trading bonds, but still wanted to build something of his own. After the markets went sideways in 1998, he quit. The following year, he and Vanessa relocated to San Diego, where they bought a fixer-upper; the house was surrounded by national forest and came with three donkeys. For a while, the couple coasted, repairing the house and planting a garden. They became avid hikers. “If I let him choose the hike, it’s always whatever is the highest, whatever is the longest,” Vanessa told me. One of his hobbies was working on old Math Olympiad problems, which could leave him obsessed and cranky until he solved them. The Internet was still new; Rusczyk did some online math tutoring and began thinking about the possibilities.

In 2003, when he was thirty-one, Rusczyk launched artofproblemsolving.com . He used off-the-shelf forum software to set up a community message board and led interactive classes based on his and Lehoczky’s books. Word spread, and young math brainiacs from around the world joined the forum, sharing nerdy puns, posting intriguing problems, and spurring one another along. Yufei Zhao, an early community member from Canada who competed in the I.M.O. three times and is now a math professor at the Massachusetts Institute of Technology, recalled his routine after getting home from high school: “Logging onto this forum was the first thing I did,” he said.

In the twenty years from 1995 to 2014, teams from the U.S. never managed to rank first at the I.M.O. But since 2015 the U.S. has claimed four first-place victories there—an outcome partly attributable to AoPS. Many variables played a role in those successes—including other math enrichment programs and the tutelage of lead coach Po-Shen Loh—but all the members of those winning U.S. teams were AoPS’s students. They were among the first generation to grow up with access to its curriculum. In learning mathematics, just as with studying piano or playing tennis, the earlier that talented individuals start training, the more they may be able to attain. In Rusczyk’s view, this isn’t just a matter of acquiring mathematical knowledge. The pervasive stereotype of children who are labelled as “geniuses” or “gifted” at math assumes that their brilliance requires little effort; by that definition, a genius shouldn’t struggle to learn. (Rusczyk and many other math educators aren’t fans of those labels.) Rusczyk’s boot camp experiences, however, had prepared him for confronting tough, unfamiliar problems of any kind. By normalizing struggle and failure from an earlier age, AoPS was designed to show math prodigies that it was O.K. to stumble and grow.

When COVID-19 struck, AoPS, working pro bono, built a web platform to host the U.S.A. Mathematical Olympiad and other contests. In lieu of the MathCounts national finals, Chandler and her colleagues unofficially offered their 2020 state competition exam on the AoPS site. The day after the test, Rusczyk and David Patrick, a former math professor who is an AoPS curriculum director, reviewed some of the questions in an AoPS chat room before an audience of more than three thousand online students. Rusczyk moderated the chat from two large monitors at his standing desk at work; the walls around him displayed a letter from Benoit Mandelbrot and two delicately rendered oil paintings, by Vanessa, of white manzanita blossoms and red Indian paintbrush. Typing on his keyboard, he walked through the first problem, about an equilateral triangle. Each time he posted a question, a wave of replies came back; he grinned as the students chimed in. “They’re fast, and they all want to be first,” he told me. While discussing a subsequent problem, he laughed at a student’s message: “I got it before you did, Richard!”

While Patrick reviewed the next set of problems, Rusczyk sipped water from a stainless steel mug. I asked whether he had been like these kids.

“Honestly, we’re building stuff for the thirteen-year-old version of ourselves,” he said. “It turns out these kids are a lot like us. They find the same things neat. They find the same things beautiful.”

Many AoPS students learn from one another at the same time as they learn from Rusczyk and his team. Olivia, a precocious twelve-year-old who lives in the rural town of La Grande, Oregon, was able to intuit basic algebra concepts by age eight; last July, she began her first online course with AoPS, in algebra. At the initial weekly class session, the teacher posted the first problem to the chat room, and Olivia, unaccustomed to the text-chat format, copied it down with a pencil. When she glanced back up, other pupils had already submitted their answers. Their speed stunned her. “You could see this panic,” her mother, Angela D’Antonio, recalled. But Olivia soon became a frequent visitor to the online message board to work with other students on hard “challenge problems.” (The other kids were situated in Toronto, India, and Singapore, among other places.) She quickly became one of the first to answer problems during class. Olivia has “just grown by leaps and bounds,” D’Antonio said, and not just in math; on the AoPS boards, Olivia—who is usually shy—has discovered friends with whom she can talk about Dungeons & Dragons and cryptography.

AoPS’s paid resources aren’t cheap. An online high-school-level course with a textbook can cost more than six hundred dollars; the elementary-school-level Beast Academy print books run about a hundred and twenty dollars per set, and a subscription to the accompanying online platform costs ninety-six dollars a year. For much of the past twenty years, U.S. public school systems have mainly focussed on raising the academic proficiency of the weakest students; the families of math overachievers were forced to turn, when they could, to private enrichment programs—from math circles and summer camps to AoPS and newer Web sites, such as Brilliant and Expii. Still, around seventy public school districts, from Albuquerque, New Mexico, to Mankato, Minnesota, now buy AoPS materials for their advanced elementary-school students—a move accelerated by the pandemic.

Meanwhile, since 2011, the nonprofit that Rusczyk founded, the Art of Problem Solving Initiative, has supported a residential summer camp program for mathematically talented middle-school kids from low-income and historically marginalized communities. The camp is now known as Bridge to Enter Advanced Mathematics ( BEAM ) Summer Away, and is held in New York and Los Angeles. Led by a math educator named Daniel Zaharopol, it has provided more than six hundred students with long-term mentoring and support. This year, BEAM is also giving selected fifth-graders at around ten partnering schools across the U.S. free access to AoPS curricula and other supporting resources. In a separate experiment led by AoPS, this fall more than three hundred bright, math-curious pupils from underserved areas of Atlanta, Detroit, San Juan, and elsewhere have been participating in live-streamed AoPS classes for free.

In mathematics, a concept known as the random walk describes a meandering path that is determined, at each step, by a random process, such as tossing a coin. Say you’re standing at a street corner on Fifth Avenue and you flip two coins. If it’s two heads, you walk one block north; if it’s one head and one tail, you walk one block east, and so on. At each intersection, you repeat the process. According to a century-old theorem by a Hungarian mathematician named George Pólya, if you keep up this sort of exercise, after many, many coin flips, the probability of winding up where you started approaches a hundred per cent.
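The two-coin street walk described above can be sketched as a short simulation. This is a hypothetical illustration, not anything from the article; the mapping of the four coin outcomes to north, south, east, and west is an assumption chosen so that each direction comes up with probability one quarter.

```python
import random

def flip_two_coins(rng):
    """Map two fair coin flips to one block of walking.
    Assumed mapping: HH -> north, TT -> south, HT -> east, TH -> west,
    so each direction has probability 1/4."""
    a = rng.random() < 0.5
    b = rng.random() < 0.5
    if a and b:
        return (0, 1)    # north
    if not a and not b:
        return (0, -1)   # south
    if a:
        return (1, 0)    # east
    return (-1, 0)       # west

def returns_to_start(steps, rng):
    """Walk `steps` blocks; report whether we ever revisit the starting corner."""
    x = y = 0
    for _ in range(steps):
        dx, dy = flip_two_coins(rng)
        x, y = x + dx, y + dy
        if x == 0 and y == 0:
            return True
    return False

def estimate_return_probability(steps, trials, seed=0):
    """Monte Carlo estimate of the chance of returning within `steps` blocks."""
    rng = random.Random(seed)
    hits = sum(returns_to_start(steps, rng) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Larger step budgets give estimates that creep upward, consistent with
    # Polya's theorem that the return probability approaches 1 in two dimensions.
    for steps in (10, 100, 1000):
        print(steps, estimate_return_probability(steps, trials=2000))
```

Running it with progressively larger step budgets shows the estimated return probability rising toward one, as Pólya's theorem predicts for a walk on a two-dimensional grid.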

Rusczyk learned about random-walk theory as a teen-ager at a math-tournament lecture; Lehoczky was there, too. Later, while visiting an amusement park, they began flipping coins to decide where to go or what to do. Should they climb over a fence or take the long way around? Have hot dogs or pizza for lunch? The game became a lifelong tradition. Once, coin-flipping their way around Manhattan, the two friends wound up at a Tibetan restaurant; they never would have chosen it, but it turned out to be good.

Our major life choices aren’t purely random, of course, but they can feel like leaps of faith. In some ways, random-walk theory seems like an apt metaphor for Rusczyk’s peregrinations into and away from math. “I got pulled back to the origin,” he said. Creating AoPS was a return to his math-competition roots.

And yet he doesn’t see himself, or his company, as teaching mathematics. Its mission is “to discover, inspire, and train the great problem solvers of the next generation”; its real impact, Rusczyk said, might be “revealing to the kids themselves how much they can do” at something they love. Rusczyk hopes to expand Beast Academy—which is currently used with gifted kids in grades 2-5—into a full K-6 curriculum that public elementary schools can adopt for regular math classes. It would be a further step toward democratizing advanced learning. He figures that some kids are unaware that they are potential math whizzes. He wants to help students “find themselves” earlier than seventh grade, when he found himself. He hopes that the curriculum might help guide more young brainiacs toward lives in math, or outside of it—in science, finance, or Silicon Valley.

One Sunday, I Zoomed with Rusczyk while he and Vanessa worked a morning shift in the AoPS warehouse. They’d woken up early and sipped coffee in their garden as dawn broke, then unloaded hay to feed their donkeys. Rusczyk had driven his dark gray Tesla to the AoPS office, where he’d done a quick sanitizing wipe-down of surfaces in the second-floor break room and bathrooms. He printed out book-order invoices in the finance office, then ran down to unlock the front door for Vanessa, who had driven separately. A petite brunette with frizzy tresses, she walked in wearing flip-flops, shorts, and an olive-green tank top; Rusczyk, who was dressed in green cargo shorts and a red T-shirt, looked serious and a bit tired. His days were crammed with e-mails and video and phone meetings—the workaday business of shepherding his expanding firm in the middle of a pandemic.

In the shipping room, he grabbed Beast Academy books, which were stocked on metal shelves, and laid them atop a growing tower of crisscrossed book orders on a red plastic cart. Each time the cart filled up, he transferred the books to an array of tables. It was work he actually enjoyed, he told me. The textbooks were a direct link to the enthusiastic math learners who would soon be engrossed in their pages.

“Feels like we’re doing something real,” Vanessa said, working at her own book cart.

“Yeah—doing something real,” Rusczyk said. A couple thousand books would be boxed and shipped the next day.


Mathematics Olympiads

Mathematics Olympiads are mathematics competitions. In some countries, the term refers to all math competitions; in others, including the United States, it refers specifically to proof-based competitions.

The International Mathematical Olympiad (IMO) is the most famous math Olympiad. In the United States, the best-known is the United States of America Mathematical Olympiad (USAMO); to qualify, students must do well on the American Mathematics Competitions (AMC 10 or AMC 12) and then on the American Invitational Mathematics Examination (AIME). Another American contest is the USAMTS, where a strong performance can also earn qualification for the AIME. Both are proof-based Olympiads held in the United States.


