
How to Write Top-Graded Essays in English

  • 5-minute read
  • 7th December 2022

Writing English papers and essays can be challenging at first, but with the right tools, knowledge, and resources, you can improve your writing skills. In this article, you’ll get some tips and tricks on how to write a top-graded essay in English.

Have you heard the saying “practice makes perfect”? Well, it’s wrong. Practice does lead to improvement, though. Whether you’re taking an English composition class, studying for the IELTS or TOEFL, or preparing to study abroad, you can always find new ways to practice writing in English.

If you practice on a daily basis, you’ll be exercising the skills you know while challenging yourself to learn even more. There are many ways you can practice writing in English daily:

  • Keep a daily journal.
  • Write practice essays.
  • Do creative writing exercises.

Read in English

The best way to improve your writing is to read English books, news articles, essays, and other media. By reading the writing of other authors (whether they’re native or non-native speakers), you’re exposing yourself to different writing styles and learning new vocabulary. Be sure to take notes when you’re reading so you can write down things you don’t know (e.g., new words or phrases) or sentences or phrases you like.

For example, maybe you need to write a paper related to climate change. By reading news articles or research papers on this topic, you can learn relevant vocabulary and knowledge you can use in your essay.

FluentU has a great article with a list of 20 classic books you can read in English for free.

Immerse Yourself in English

If you don’t live in an English-speaking country, you may be thinking, “How can I immerse myself in English?” There are many ways to overcome this challenge. The following strategies are especially useful if you plan to study or travel abroad:

  • Follow YouTube channels that focus on learning English or that have English speakers.
  • Use social media to follow English-speaking accounts you are interested in.
  • Watch movies and TV shows in English or use English subtitles when watching your favorite shows.
  • Participate in your English club or salon at school to get more practice.
  • Become an English tutor at a local school (teaching others is the best way to learn).

By constantly exposing yourself to English, you will improve your writing and speaking skills.

Visit Your Writing Center

If you’re enrolled at a university, you most likely have a free writing center you can use if you need help with your assignments. If you don’t have a writing center, ask your teacher for help and for information on local resources.


Use Your Feedback

After you submit an English writing assignment, you should receive feedback from your teacher on how you did. Use this feedback to your advantage. If you haven’t been getting feedback on your writing, ask your teacher to explain what issues they are seeing in your writing and what you could do to improve.

Be Aware of Your Common Writing Mistakes

If you review your feedback on writing assignments, you might notice some recurring mistakes you are making. Make a list of common mistakes you tend to make when writing, and use it when doing future assignments. Some common mistakes include the following:

  • Grammar errors (e.g., not using articles).
  • Incorrect vocabulary (e.g., confusing however and therefore).
  • Spelling mistakes (e.g., writing form when you mean from).
  • Missing essay components (e.g., not using a thesis statement in your introduction).
  • Not using examples in your body paragraphs.
  • Not writing an effective conclusion.

This is just a general list of writing mistakes, some of which you may make. But be sure to go through your writing feedback or talk with your teacher to make a list of your most common mistakes.

Use a Prewriting Strategy

So many students sit down to write an essay without a plan. They just start writing whatever comes to their mind. However, to write a top-graded essay in English, you must plan and brainstorm before you begin to write. Here are some strategies you can use during the prewriting stage:

  • Freewriting
  • Concept Mapping

For more detailed information on each of these processes, read “5 Useful Prewriting Strategies.”

Follow the Writing Process

All writers should follow a writing process. However, the writing process can vary depending on what you’re writing. For example, the process for a Ph.D. thesis is going to look different to that of a news article. Regardless, there are some basic steps that all writers should follow:

  • Understanding the assignment, essay question, or writing topic.
  • Planning, outlining, and prewriting.
  • Writing a thesis statement.
  • Writing your essay.
  • Revising and editing.

Writing essays, theses, news articles, or papers in English can be challenging. They take a lot of work, practice, and persistence. However, with these tips, you will be on your way to writing top-graded English essays.

If you need more help with your English writing, the experts at Proofed will proofread your first 500 words for free!



10 steps to writing high-scoring IELTS essays

Date published: 01 February 2023

This article was first published on IELTS.IDP.com

Whether you take the General Training or Academic IELTS test, the second writing task is writing an essay in response to a problem or argument. Here are 10 easy steps, with lots of tips, to guide you on how to write high-scoring essays.

How is the IELTS essay component marked?

Fairness and accuracy are critically important when marking IELTS writing tasks. Your essay will be marked by at least two experienced IELTS examiners on the following criteria:

  • Task response - Whether you answered the question fully and supported your answer well.
  • Coherence and cohesion - How well you linked your ideas together.
  • Lexical resource - Whether you used a wide range of vocabulary correctly and appropriately.
  • Grammatical range and accuracy - How many grammatical structures you used accurately and appropriately.

Each of these criteria is worth 25 percent of your total score for the essay writing task. Both of your writing tasks are used to calculate your overall writing band score.

How to write high-scoring essays in 10 easy steps

Step one: Plan your time

The Writing test (consisting of Writing tasks 1 and 2) takes approximately 60 minutes. Plan to spend around 20 minutes on your first task, and 40 minutes on your essay task. A sample plan for your time might be:

  • 5 to 10 minutes reading the essay question and planning your answer
  • 15 to 20 minutes writing your first draft
  • 10 minutes proofreading and editing your essay

Step two: Read the question

While you may be anxious to jump straight into writing, make sure you take the time to carefully read the essay question. If you misunderstand the question, you risk writing an essay that does not address the issues properly, which will lower your score.

Step three: Highlight the issues to address

There will be multiple issues that you will need to address in your essay. Addressing each issue individually is key to achieving a high essay score. Highlight each individual issue that you will need to address.

Step four: Outline your response

Create an outline of how you will respond to the issues in your essay. This will serve as your ‘blueprint’ when you write your first draft. As a general rule, your essay should have:

  • An introduction stating what you will talk about
  • Two or three body paragraphs, each addressing one issue or idea
  • A conclusion summing up what was discussed in the essay

Make sure you note which idea or issue you will address in each paragraph. Check that the issues you highlighted are all accounted for in your outline.

Step five: Expand on your ideas

Write some notes about any key points or ideas you’d like to include in each paragraph. When you’re writing your first draft, these notes will help to make sure you don’t forget any ideas you want to include.

Step six: Plan how you will connect your ideas

Connecting your ideas clearly and correctly is critical to achieving a high essay score. Try to use a range of linking words to make your essay easy to read. You can use connecting devices and phrases to:

List connected ideas

  • ‘Firstly, secondly, thirdly’
  • ‘Furthermore’

Provide more information

Compare ideas

  • ‘On the other hand’
  • ‘Alternatively’

Don’t fall into the trap of trying to put a linking word in every sentence. Essays will score higher when the writer uses linking words only where necessary and appropriate.

Step seven: Write your first draft

Now that you’ve planned your essay, it’s time to write your first draft. Follow the outline you’ve created and expand on the notes and ideas you included there.

  • Avoid informal language unless it is appropriate.
  • Avoid spelling and grammatical errors where possible.
  • Use a mix of sentence structures such as simple sentences, complex sentences and compound sentences.

Step eight: Proofread your essay

When you have completed the first draft of your essay, it’s important to proofread it. Read your essay from start to finish.

You can read it silently, but it may help to read it out loud if you can do so without disturbing others. Make a mental note or mark your paper anywhere that you may need to fix an issue.

Step nine: Edit your essay

Carefully go through the issues you noted while proofreading. Edit or rewrite these until they look and sound correct. Examples of issues and how to edit them may include:

  • The sentence is too long. A sentence is probably too long if you need to take a breath in the middle of reading it. Try splitting this up into smaller sentences.
  • A sentence sounds strange when you read it out loud. Try using different words or punctuation until it sounds right. It may need to be connected to another sentence.
  • The same word appears many times. Think about any other words you could use instead.

  • There is more than one main idea in each paragraph. Move any unrelated sentences to the correct paragraph. Each paragraph should address one issue only.

Step ten: Proofread your essay again

After your edits and before submitting your essay, give it one final proofread. Make sure you have:

  • Included all the points you highlighted in step three
  • Followed your outline from step four
  • Used good connecting words from step six
  • Fixed any errors or issues in step nine

Why choose IELTS?

IELTS is widely recognised by businesses and universities globally, and is the only English language competency test approved by all four of the following countries:

  • Australia
  • Canada
  • New Zealand
  • The United Kingdom

With convenient computer- and paper-based test options, your IELTS test can be completed in the way you’re most comfortable with. If you’re in a hurry, you could even have your test results back within two to five days!

Also, the IELTS Familiarisation test is designed to give test takers an idea of what to expect on the actual IELTS test. It includes sample questions from different parts of the test, such as Listening, Reading, and Writing. Set yourself up for success and explore our extensive library of preparation materials today.


Writing High-Scoring IELTS Essays: A Step-by-Step Guide

Writing great IELTS essays is essential for success. This guide will give you the tools to craft high-scoring essays. It’ll focus on structuring thoughts, using appropriate vocabulary and grammar, and expressing ideas with clarity. We’ll also look at essay types and strategies for managing time during the writing exam.

Practice is key. Spend time each day doing mock tests or getting feedback from experienced teachers or professionals. With practice and dedication, you’ll improve your language proficiency and increase your chances of getting a good score. Good luck!

Understanding the IELTS Essay Task

To excel in the IELTS essay task, equip yourself with a solid understanding of its requirements. Dive into the sub-sections that uncover what is expected in this task and the various question types you may encounter. Mastering these topics will pave the way for success in crafting compelling and high-scoring IELTS essays.

What is expected in the IELTS essay task

The IELTS essay task requires applicants to demonstrate their writing abilities within a set timeframe. It evaluates their capacity to produce a coherent and structured piece of writing.

A clear thesis is a must. It should be succinct, conveying the primary idea of the essay. There should also be a logical structure, including an introduction, body paragraphs, and a conclusion. The content should be relevant, using suitable examples, evidence, and arguments to support the main idea. Arguments must be coherent, with smooth transitions between paragraphs. Formal language, correct grammar, and accurate syntax are also expected.

Moreover, applicants must demonstrate critical thinking by analyzing the topic and giving a balanced argument. They must also manage their time effectively to produce a thorough answer within the word limit.

To illustrate the significance of these requirements in a real-life situation, consider Jennifer, an aspiring nurse from Brazil taking the IELTS test. At first, she found the essay task difficult. She sought help from expert tutors, who highlighted the importance of her thesis statement and the logical organization of her ideas. With effort and dedication, Jennifer mastered these skills and eventually achieved her target band score.

The types of questions asked in the IELTS essay task

The IELTS essay task covers multiple types of questions. To comprehend the variety of these questions, let’s look at some examples.

To do well, you need to prepare and practice for each type. Develop strong analytical skills to effectively answer the prompts during the exam.

Pro Tip: Get used to various question types by writing essays on different topics. This will help you adjust and boost your performance.

Descriptive questions

Descriptive questions ask you to describe a situation, process, or trend in your own words. To answer them well, manage your time carefully and create a clear structure before you begin writing; this keeps your response coherent and gives it a logical flow.

Argumentative questions

Argumentative questions are those that require thorough analysis and a display of multiple perspectives on a given topic.

They come in different types, such as:

  • Cause and Effect (e.g. What are the consequences of using social media?)
  • Pros and Cons (e.g. Should zoos be forbidden?)
  • Agree or Disagree (e.g. Is homework essential for students?).

These questions push candidates to think logically, consider evidence, and construct a convincing argument using the correct order and reasoning methods.

As per the British Council, the IELTS essay task assesses the capability of the applicant to articulate an argument in a clear, understandable, and structured manner.

Advantages and disadvantages questions

Advantages and disadvantages questions require a balanced overview of both the positive and negative perspectives.

It is important to note that advantages and disadvantages questions offer the opportunity to show understanding by discussing diverse points of view. Nevertheless, be careful when answering them, as an unbalanced response can come across as biased if the question is not tackled objectively.

Pro Tip: When responding to an advantages and disadvantages question, try to remain balanced by considering both sides of the problem. This will help you create an in-depth reply.

Problem and solution questions

Problem and solution questions require the test-taker to identify a problem and suggest effective solutions. Here are six tips to help you excel in this IELTS essay type:

  • Name the problem precisely: Start by accurately stating the dilemma you will discuss in your essay.
  • Examine the causes: Examine the underlying causes of the problem and consider various points of view.
  • Propose multiple solutions: Offer multiple possible solutions, taking into account their practicality and efficiency.
  • Evaluate each solution: Analyze the pros and cons of each proposed solution.
  • Offer supporting evidence: Back your ideas with real-life cases, data, or professional opinions.
  • Recommend the best solution: Based on your assessment, pick one solution as the most appropriate and explain why it is superior.

Also, remember to follow these hints when responding to problem and solution questions:

  • Think about short-term and long-term effects of applying each solution.
  • Prioritize realistic and feasible solutions over idealistic ones.
  • Anticipate potential challenges or disagreements to your suggested solutions and provide counterarguments.

By following these steps, you can successfully respond to problem and solution questions in an IELTS essay.

Analyzing the Essay Question

To analyze the essay question effectively, focus on breaking it down, identifying key terms and instructions, and formulating a thesis statement. These sub-sections will help you approach the essay question strategically and produce a well-structured, coherent response.

Breaking down the essay question

Break the essay question down into its core elements: the topic, the scope, the task, and any subtasks.

Use this breakdown to plan and structure your response. It helps you address all aspects of the question while staying clear and coherent.

Here are some tips for breaking down an essay question:

  • Read and understand it. Look for keywords that give clues.
  • Identify the main topic.
  • Find out the scope.
  • Analyze the task.
  • Break down subtasks.

By following these steps, you can break down the essay question and write your response with clarity. Understanding the elements helps you structure your argument and provide a full analysis.

Identifying key terms and instructions

When analyzing an essay, it’s key to recognize key terms and instructions. This allows us to know what is being asked and how to approach the topic. We can do this by:

  • Reading the question thoroughly.
  • Looking for important words.
  • Finding out the meanings of any unfamiliar terms.
  • Understanding the instructions.
  • Noting limitations or qualifiers.
  • Setting boundaries for what should be included or excluded.

Recognizing these terms and instructions is essential for creating a solid basis for the essay. Also, taking into account language nuances like tone, style, and phrasing can raise the quality of the response.

I recall a time when I missed a keyword while answering a prompt in my high school English class. Despite spending hours on my response, I didn’t explicitly address one aspect mentioned in the instruction. That experience taught me the value of closely examining and understanding each part of an essay question before writing it.

Formulating a thesis statement

Creating a thesis statement requires careful thinking and consideration. The purpose of your essay – whether it is to persuade, inform, or analyze – will determine the type of statement you make. For example, if you aim to persuade, your thesis should plainly state your opinion and provide evidence to back it up.

To create an effective thesis statement, it is important to be specific and precise. Avoid vague or broad statements that leave the reader unclear about your position. Instead, focus on making an exact claim or argument. This will help guide your essay and give it a clear purpose.

When forming your thesis statement, consider counterarguments. Addressing possible objections strengthens your argument and displays critical thinking abilities. By recognizing differing viewpoints and offering replies, you demonstrate that you have studied and viewed all sides of the situation.

In addition, a great thesis statement should be debatable. It should start a conversation and attract the reader. Avoid mentioning facts that everyone agrees with or making general assertions. Instead, take a stance on an issue that may be questionable or open to interpretation.

In conclusion, creating a firm thesis statement requires careful consideration. Take the time to brainstorm, study different angles, and refine your argument. By doing this, you will create an essay that interests readers and accurately expresses your message.

Planning and Organizing the Essay

To plan and organize your IELTS essay effectively, create an outline, brain dump your ideas, and arrange them logically. These steps will provide a clear structure and help you express your thoughts with clarity and coherence, ensuring high scores on your IELTS essays.

Creating an outline

Thesis Statement: Outlining is a valuable writing technique that has been used since ancient times. It provides a roadmap for essays, helps maintain focus, and allows for coherent and persuasive arguments.

Paragraph 1:

  • Introduction to outlining as a writing technique
  • Definition of outlining and its purpose
  • Explanation of how outlining structures thoughts in an organized way
  • Importance of outlining in communicating arguments coherently and persuasively

Paragraph 2:

  • Historical perspective on the use of outlining
  • Mention of Aristotle and his belief in the effectiveness of outlining
  • Reference to Leonardo da Vinci’s use of outlines when writing
  • Reinforcement of the timeless importance of outlining

Paragraph 3:

  • Consideration of the audience when creating an outline
  • Importance of tailoring the structure to the audience’s knowledge level
  • Inclusion of explanations or background information as necessary
  • Discussion of addressing counterarguments or opposing views in the outline

Conclusion:

  • Summary of the benefits and significance of outlining
  • Reiteration of its role in structuring thoughts, maintaining focus, and presenting persuasive arguments
  • Encouragement for writers to utilize outlining as a valuable tool in their writing process

Brain dumping ideas

Brain dumping is quickly jotting down all your thoughts about a topic or subject. This way you can express your ideas without worrying about structure or organization. To make the most of this technique, consider these four points:

  • Dedicate time and space to brainstorming. Find a quiet environment with no distractions.
  • Grab pen and paper or open a blank document. Write any ideas that come to mind, even small ones.
  • Review what you have written. Look for patterns and connections.
  • Organize your thoughts into categories or themes.

Remember, brain dumping is not a final product. It’s a tool for creativity. Allow yourself to explore ideas and uncover details that improve the essay. Here are more suggestions:

  • Go beyond the obvious ideas. Think outside the box.
  • Use mind mapping and visual aids to represent thoughts.
  • Discuss ideas with peers or mentors.
  • Take breaks if you feel overwhelmed.

Arranging ideas logically

Arrange your ideas according to a clear organizational pattern, such as chronological order, cause and effect, or order of importance.

A good way to enhance logical organization is to use a clear topic sentence in each paragraph. These sentences act as signposts, guiding readers through the essay’s main ideas without giving away too much information upfront.

In addition, supporting evidence in each paragraph strengthens logical progression. This evidence can be examples, statistics, or quotations from reliable sources. These substantiate your statements.

Lastly, transitioning smoothly between paragraphs creates a coherent flow of thoughts. Using transitional words like “however”, “in contrast”, or “similarly” helps establish connections between ideas and avoids abrupt changes of topic.

Writing the Introduction

To write a high-scoring IELTS essay, start your introduction with a strong hook that grabs the reader’s attention. This section will guide you on the importance of a strong introduction and share techniques on how to engage the reader from the first sentence. Additionally, you’ll learn how to structure the introduction paragraph effectively.

The importance of a strong introduction

Writing a strong introduction is essential. It sets the tone for an article and draws readers in. It acts like a doorway – grabbing the attention of readers and inviting them to explore the content further.

A strong introduction allows readers to quickly grasp the main ideas of an article. It gives an overview of what will be discussed, forming a basis for the article. Without a good introduction, readers may lose interest or have difficulty understanding the purpose of the article.

Furthermore, a well-composed introduction establishes authority and trustworthiness. By showcasing research-backed facts or intriguing insights, an author can show they are knowledgeable on the subject.

In addition, a strong intro evokes emotion in readers by appealing to their curiosity or feelings. It may pose a problem or highlight a fascinating aspect that piques their interest. By making an emotional connection with readers from the start, writers keep their audience engaged throughout the piece.

Now let’s look at some unique details about introductions. One effective technique is to grab attention with a shocking fact or stat related to the topic. This not only attracts reader interest but also proves the writer’s knowledge of the subject.

Another technique is to use storytelling elements in introductions. Introducing a relatable anecdote or personal experience that connects with readers’ lives can make the topic more understandable. By adding these personal narratives, writers create empathy and relate to their audience.

Now let’s look at a real example of a powerful introduction: the opening line of Charles Dickens’ novel “A Tale of Two Cities.” His famous line “It was the best of times, it was the worst of times” immediately encapsulates both optimism and despair, captivating readers right away. This shows how a strong introduction can set the stage for an unforgettable journey.

Remember, a powerful introduction can make or break an article. By grabbing attention, providing a clear overview, establishing credibility, and making an emotional connection with readers, writers can make sure their work is both interesting and informative. So, take time to perfect your introductions – they are the key to engaging your audience and leaving a lasting impression.

How to grab the reader’s attention

  • Start with an intriguing fact or a thought-provoking question. This will get the reader’s attention.
  • Introduce the topic and show why it’s important. Keep it concise and focused.
  • State your main point or argument. Give the reader a roadmap.

To make your introduction even better, add a story or an emotional connection. This will create an instant bond and keep them hooked.

Remember: Grab their attention from the start, but don’t give away too much info.

Pro Tip: Get feedback on your intro before finalizing it. Revise it as needed.

Structuring the introduction paragraph

Engage your reader with an interesting story or statistic. Then, outline your main points concisely and without jargon. Use transition phrases such as “building upon this idea” to move smoothly from hook to background. Finish off with a clear thesis statement. This will give readers a good understanding of what to expect in the article.

Developing Body Paragraphs

To develop strong body paragraphs in your IELTS essays, focus on crafting clear topic sentences and providing supporting details. Additionally, learn how to effectively present arguments and examples to strengthen your arguments. Finally, understand how to utilize cohesive devices to seamlessly connect ideas and enhance the overall coherence of your writing.

Topic sentences and supporting details

Topic sentences state the main idea of a paragraph; supporting details explain and reinforce it.

Each topic sentence should be followed by supporting details that strengthen its message. Present the details in a logical order, and keep them relevant and specific to the main idea. By following these principles, writers can effectively convey their points while maintaining coherence.

To improve your writing further, use transitional phrases between supporting details and acknowledge counterarguments within your paragraphs. This makes your writing more persuasive without compromising its informative nature.

Providing arguments and examples

Strong arguments need strong support. Back up each claim with factual evidence, such as statistics or research findings; concrete numbers are one of the most persuasive ways to convince readers.

Also include details that haven’t been discussed before. Exploring lesser-known aspects of a topic adds depth and makes your writing look more professional.

Evidence engages readers and makes your perspective hard to dismiss, so don’t be afraid to write persuasive body paragraphs. Use evidence to make your writing stand out, and tailor it to your audience’s needs and interests.

Using cohesive devices to link ideas

Cohesive devices such as transitional phrases and linking words help ideas flow seamlessly, giving the reader a better understanding of the writer’s thoughts.

One effective way to use them is to introduce examples and supporting evidence within a paragraph. This strengthens arguments by adding information that reinforces the main point. Phrases like “for example” or “specifically” are great for linking ideas and bringing clarity.

Pro Tip: Pick the right word or phrase for the intended meaning. Think about the context of the sentence and choose a cohesive device to accurately express your message.

Crafting the Conclusion

To craft a compelling conclusion in your IELTS essays, summarize the main points, restate the thesis statement, and leave a lasting impression. Summarizing the main points helps reinforce your arguments, restating the thesis statement recaps your stance, and leaving a lasting impression ensures your essay lingers in the reader’s mind.

Summarizing the main points

Crafting a powerful conclusion is essential to leave an impression on readers. Here’s how:

  • Highlight each point’s importance and impact.
  • Show their connection to form a cohesive narrative.
  • Explain how they contribute to the overall message.
  • End with a call to action or thought-provoking final remark.

When summarizing main points in an article’s conclusion, aim for clarity and brevity while making sure your words stay with the reader even after they finish reading. Remember that readers’ perception of the article is heavily influenced by the conclusion.

Restating the thesis statement

Restating your thesis means repeating your central idea in fresh words rather than copying it verbatim. Compare these two versions of the same message:

Original: “Have you ever wanted to live a crazier life? Let’s give it a try! Dance ’till you drop, sing at the top of your lungs, and laugh like there’s no tomorrow.”

Restated: “Have you ever dreamed of living a wilder life? Let’s do it! Dance ’til you can’t move, belt out your favorite songs, and laugh with joy.”

Notice that the restatement preserves the meaning while changing the vocabulary and rhythm.

Leaving a lasting impression

Crafting a lasting impression is key. Get to the point, use strong words and vivid imagery, and end with a call to action.

Customize your message to cater to the needs of your audience. Speak with the right tone and style for engagement.

Winston Churchill is a prime example of leaving a lasting impression. His speeches during World War II inspired nations. Even after his death, his words still have an impact.

In short: be concise, employ impactful words, use visual aids, make a call to action, and understand your audience. Draw inspiration from those who came before, and you can make your mark in communication.

Proofreading and Editing

To ensure your IELTS essays score highly, focus on checking for grammar and spelling errors, improving sentence structure and clarity, and ensuring coherence and cohesion as you proofread and edit. This process will refine your writing and make it more polished and effective.

Checking for grammar and spelling errors

Proofreading and editing are essential. Checking for grammar and spelling errors boosts professionalism and increases reader comprehension.

Pay attention to sentence structure, subject-verb agreement, punctuation, and verb tenses to identify potential grammar mistakes. Check for run-on sentences and fragments.

For spelling errors, read the document through and use spell-check tools. However, these tools may not catch homophones or certain typos.

A great technique is to read the text aloud. It can help spot awkward phrasing and spelling mistakes. It’s a good idea to get another set of eyes to review the work too.

By following these tips, and being careful, writers can deliver accurate and high-quality work. Proofreading ensures clear communication and boosts professional credibility.

Improving sentence structure and clarity

To improve your sentence structure and clarity, follow these six steps:

  • Start with a topic sentence – clearly state the main idea.
  • Use active voice instead of passive for concise writing.
  • Keep sentences short and simple.
  • Use transitions to connect ideas.
  • Cut out wordiness.
  • Revise and proofread.

Plus, vary sentence length, check subject-verb agreement, adjust tone according to context, and read your work aloud. Practicing these tips will help you improve your sentences.

In 1928, Virginia Woolf published “Orlando,” a modernist masterpiece. She disregarded traditional sentence structures and embraced a fluid style. Her success proved that breaking free from conventional sentence patterns can lead to creative and captivating writing.

Ensuring coherence and cohesion

Key aspects for ensuring coherence and cohesion:

  • Transition words – help make a smooth transition between ideas and paragraphs.
  • Pronouns – like ‘it’, ‘he’, ‘she’ refer back to nouns, creating continuity.
  • Repetition – of words or phrases reinforces main ideas.
  • Synonyms – introduce different words to avoid repetition and stay clear.
  • Logical order – so readers can follow thoughts easily.

To further improve your writing:

  • Read out loud – awkward sentences and gaps in flow become clear.
  • Use sentence variety – simple, compound and complex sentences.
  • Take breaks – get fresh perspectives on improvement areas.
  • Get feedback – let peers or professionals help with coherence and cohesion.

These suggestions help readers follow ideas without confusion. They create clear connections and a seamless experience.

Practice and Tips for Success

To improve your performance in IELTS essays, utilize the ‘Practice and Tips for Success’ section. Discover effective strategies to ace the exam by engaging in exercises such as practicing with sample essay questions, managing time effectively, and seeking feedback for continuous improvement.

Practicing with sample essay questions

Analyze the prompt. Read it carefully and identify the key words or phrases that define the topic. Grasping the prompt helps form a focused thesis statement.

Research and gather info. Do thorough research to gather pertinent facts from reliable sources. Make notes and organize them based on arguments or counterarguments.

Plan your essay. Put together an outline or structure before you start writing. This ensures coherence and logical progression of ideas.

Write a draft. Use the notes and outline as a guide and begin writing your essay. Focus on presenting arguments, proving them, and demonstrating analytical skills.

Review and revise. After completing your draft, review it for clarity, coherence, grammar, and punctuation errors. Make the needed changes to strengthen your essay’s content and flow.

Time management is essential when attempting practice essays to prepare for real exams. Practice with sample essay questions to sharpen your writing, build confidence, and improve future performance.

Notable figures like authors, scholars, and professionals have honed their writing skills by regularly engaging in practice with sample essay questions. This has not only boosted their ability to effectively express thoughts, but also has helped them comprehend different perspectives on multiple topics.

Managing time effectively

Don’t let missed opportunities haunt you! Take control of your time and reap the rewards. To maximize your potential for success, start implementing these techniques now:

  • Prioritize tasks. Identify the most important ones first. This ensures time is spent on activities that have the greatest impact.
  • Set goals. Establish clear goals for each day or week. This provides you with a sense of direction and purpose.
  • Create a schedule. Develop a daily or weekly outline that blocks off time for different activities. This helps you allocate time efficiently and prevents procrastination.
  • Avoid multitasking. Studies show this decreases productivity. Focus on one task at a time to ensure quality work.

Productivity tools such as task management apps or timers can help. Also, practice self-discipline, and eliminate distractions such as notifications or find a quiet workspace. This enhances focus and concentration. Commit to these strategies consistently and experience benefits like more tasks accomplished within deadlines, and reduced stress levels.

Seeking feedback and improvement

Actively seek feedback from mentors, colleagues, and supervisors. Treat criticism as a chance to progress rather than taking it personally. Ask for specific comments on a project or performance so the feedback you receive is actionable. Take time to reflect on it and identify what you can do to improve. Even when the feedback is positive, keep searching for ways to develop.

Remember, requesting feedback requires openness and humility. Showing that you want to learn is a sign of growth.

Pro Tip: Listen closely to feedback, rather than defending yourself. This will help you understand the point of view and make improvements.

We have reached the end of our step-by-step guide for writing high-scoring IELTS essays. Reflecting on the key points covered, we explored strategies and techniques to improve your essay writing: understanding the marking criteria, managing time, building strong arguments, and structuring essays are all necessary tools for success.

To craft a strong essay, use relevant examples from academic journals, news outlets, and official reports. Demonstrate critical thinking by analyzing perspectives on a topic, and ensure that your ideas flow logically, using transition words and phrases. Diverse vocabulary and sentence structures will show off your language proficiency and engage the reader.

It is important to note that practice is key to success in the IELTS exam. Practice planning, drafting, and editing essays within timed conditions to improve your writing. Dedication, practice, and an understanding of the strategies discussed in this article will help you achieve higher scores. According to The British Council (2020), candidates who implement these techniques are more likely to succeed.

Frequently Asked Questions

FAQ 1: What is the key to writing high-scoring IELTS essays? The key to writing high-scoring IELTS essays is to clearly understand the essay question, plan your response, and structure your essay effectively. Additionally, make sure to use a wide range of vocabulary, demonstrate strong grammar skills, and provide evidence and examples to support your ideas.

FAQ 2: How can I improve my vocabulary for IELTS essays? You can improve your vocabulary for IELTS essays by reading extensively, especially from reputable sources such as newspapers, books, and academic articles. Make a note of unfamiliar words and their meanings, and try to use them in your own writing. Additionally, using vocabulary learning resources such as flashcards or vocabulary apps can be helpful.

FAQ 3: Are there any specific essay structures I should follow? Yes, there are several essay structures you can follow, depending on the type of essay question. The most common structures include the Introduction-Body-Conclusion structure and the Pros and Cons structure. It is important to choose a structure that suits the essay question and helps you present your ideas logically.

FAQ 4: How can I improve my grammar skills for IELTS essays? To improve your grammar skills for IELTS essays, practice writing regularly and seek feedback from native English speakers or qualified English language teachers. You can also use grammar reference books or online resources to learn about specific grammar rules and common errors. Take note of your frequent errors and work on them systematically.

FAQ 5: How long should an IELTS essay be? An IELTS Writing Task 2 essay should be at least 250 words; there is no penalty for writing more, but aiming for roughly 250–300 words leaves you enough time to develop your ideas fully and still review your work. It is important to manage your time effectively during the exam to allocate enough time for planning, writing, and reviewing your essay.

FAQ 6: How can I practice for writing high-scoring IELTS essays? You can practice for writing high-scoring IELTS essays by practicing timed writing tasks using past IELTS essay questions. Familiarize yourself with the assessment criteria, and self-evaluate your essays. Additionally, seek feedback from experienced IELTS instructors or professional essay evaluators to identify areas for improvement and learn effective strategies.


Essay and dissertation writing skills

Planning your essay

Writing your introduction

Structuring your essay

  • Writing essays in science subjects
  • Brief video guides to support essay planning and writing
  • Writing extended essays and dissertations
  • Planning your dissertation writing time

Structuring your dissertation

  • Top tips for writing longer pieces of work

Advice on planning and writing essays and dissertations

University essays differ from school essays in that they are less concerned with what you know and more concerned with how you construct an argument to answer the question. This means that the starting point for writing a strong essay is to first unpick the question and to then use this to plan your essay before you start putting pen to paper (or finger to keyboard).

These short, downloadable Tips for Successful Essay Writing and Answering the Question resources are a really good starting point. Both will help you to plan your essay, as well as giving you guidance on how to distinguish between different sorts of essay questions.

You may find it helpful to watch this seven-minute video on six tips for essay writing which outlines how to interpret essay questions, as well as giving advice on planning and structuring your writing:

Different disciplines will have different expectations for essay structure and you should always refer to your Faculty or Department student handbook or course Canvas site for more specific guidance.

However, broadly speaking, all essays share the following features:

Essays need an introduction to establish and focus the parameters of the discussion that will follow. You may find it helpful to divide the introduction into areas to demonstrate your breadth and engagement with the essay question. You might define specific terms in the introduction to show your engagement with the essay question; for example, ‘This is a large topic which has been variously discussed by many scientists and commentators. The principal tension is between the views of X and Y who define the main issues as…’ Breadth might be demonstrated by showing the range of viewpoints from which the essay question could be considered; for example, ‘A variety of factors including economic, social and political, influence A and B. This essay will focus on the social and economic aspects, with particular emphasis on…..’

Watch this two-minute video to learn more about how to plan and structure an introduction:

The main body of the essay should elaborate on the issues raised in the introduction and develop an argument(s) that answers the question. It should consist of a number of self-contained paragraphs each of which makes a specific point and provides some form of evidence to support the argument being made. Remember that a clear argument requires that each paragraph explicitly relates back to the essay question or the developing argument.

  • Conclusion: An essay should end with a conclusion that reiterates the argument in light of the evidence you have provided; you shouldn’t use the conclusion to introduce new information.
  • References: You need to include references to the materials you’ve used to write your essay. These might be in the form of footnotes, in-text citations, or a bibliography at the end. Different systems exist for citing references and different disciplines will use various approaches to citation. Ask your tutor which method(s) you should be using for your essay and also consult your Department or Faculty webpages for specific guidance in your discipline. 

Essay writing in science subjects

If you are writing an essay for a science subject you may need to consider additional areas, such as how to present data or diagrams. This five-minute video gives you some advice on how to approach your reading list, planning which information to include in your answer and how to write for your scientific audience – the video is available here:

A PDF providing further guidance on writing science essays for tutorials is available to download.

Short videos to support your essay writing skills

There are many other resources at Oxford that can help support your essay writing skills and if you are short on time, the Oxford Study Skills Centre has produced a number of short (2-minute) videos covering different aspects of essay writing, including:

  • Approaching different types of essay questions  
  • Structuring your essay  
  • Writing an introduction  
  • Making use of evidence in your essay writing  
  • Writing your conclusion

Extended essays and dissertations

Longer pieces of writing like extended essays and dissertations may seem like quite a challenge from your regular essay writing. The important point is to start with a plan and to focus on what the question is asking. A PDF providing further guidance on planning Humanities and Social Science dissertations is available to download.

Planning your time effectively

Try not to leave the writing until close to your deadline, instead start as soon as you have some ideas to put down onto paper. Your early drafts may never end up in the final work, but the work of committing your ideas to paper helps to formulate not only your ideas, but the method of structuring your writing to read well and conclude firmly.

Although many students and tutors will say that the introduction is often written last, it is a good idea to begin to think about what will go into it early on. For example, the first draft of your introduction should set out your argument, the information you have, and your methods, and it should give a structure to the chapters and sections you will write. Your introduction will probably change as time goes on but it will stand as a guide to your entire extended essay or dissertation and it will help you to keep focused.

The structure of extended essays or dissertations will vary depending on the question and discipline, but may include some or all of the following:

  • The background information to - and context for - your research. This often takes the form of a literature review.
  • Explanation of the focus of your work.
  • Explanation of the value of this work to scholarship on the topic.
  • List of the aims and objectives of the work and also the issues which will not be covered because they are outside its scope.

The main body of your extended essay or dissertation will probably include your methodology, the results of research, and your argument(s) based on your findings.

The conclusion should summarise the value your research has added to the topic, and suggest any further lines of research you would undertake given more time or resources.

Tips on writing longer pieces of work

Approaching each chapter of a dissertation as a shorter essay can make the task of writing a dissertation seem less overwhelming. Each chapter will have an introduction, a main body where the argument is developed and substantiated with evidence, and a conclusion to tie things together. Unlike in a regular essay, chapter conclusions may also introduce the chapter that will follow, indicating how the chapters are connected to one another and how the argument will develop through your dissertation.

For further guidance, watch this two-minute video on writing longer pieces of work.


How to Write the Perfect Essay: A Step-By-Step Guide for Students

  • June 2, 2022

  • What is an essay?
  • What makes a good essay?
  • Typical essay structure
  • 7 steps to writing a good essay
  • A step-by-step guide to writing a good essay

Whether you are gearing up for your GCSE coursework submissions or looking to brush up on your A-level writing skills, we have the perfect essay-writing guide for you. 💯

Staring at a blank page before writing an essay can feel a little daunting. Where do you start? What should your introduction say? And how should you structure your arguments? They are all fair questions, and we have the answers! Take the stress out of essay writing with this step-by-step guide – you’ll be typing away in no time. 👩‍💻

What is an essay?

Generally speaking, an essay designates a literary work in which the author defends a point of view or a personal conviction, using logical arguments and literary devices in order to inform and convince the reader.

So – although essays can be broadly split into four categories: argumentative, expository, narrative, and descriptive – an essay can simply be described as a focused piece of writing designed to inform or persuade. 🤔

The purpose of an essay is to present a coherent argument in response to a stimulus or question and to persuade the reader that your position is credible, believable and reasonable. 👌

So, a ‘good’ essay relies on a confident writing style – it’s clear, well-substantiated, focussed, explanatory and descriptive. The structure follows a logical progression and, above all, the body of the essay clearly correlates to the title – answering the question where one has been posed.

But, how do you go about making sure that you tick all these boxes and keep within a specified word count? Read on for the answer as well as an example essay structure to follow and a handy step-by-step guide to writing the perfect essay – hooray. 🙌

Sometimes, it is helpful to think about your essay like it is a well-balanced argument or a speech – it needs to have a logical structure, with all your points coming together to answer the question in a coherent manner. ⚖️

Of course, essays can vary significantly in length but besides that, they all follow a fairly strict pattern or structure made up of three sections. Lean into this predictability because it will keep you on track and help you make your point clearly. Let’s take a look at the typical essay structure:  

#1 Introduction

Start your introduction with the central claim of your essay. Let the reader know exactly what you intend to say with this essay. Communicate what you’re going to argue, and in what order. The final part of your introduction should also say what conclusions you’re going to draw – it sounds counter-intuitive but it’s not – more on that below. 1️⃣

#2 Body paragraphs

Make your point, evidence it and explain it. This part of the essay – generally made up of three or more paragraphs depending on the length of your essay – is where you present your argument. The first sentence of each paragraph – much like an introduction to an essay – should summarise what your paragraph intends to explain in more detail. 2️⃣

#3 Conclusion

This is where you affirm your argument – remind the reader what you just proved in your essay and how you did it. This section will sound quite similar to your introduction but – having written the essay – you’ll be summarising rather than setting out your stall. 3️⃣

No essay is the same but your approach to writing them can be. As well as some best practice tips, we have gathered our favourite advice from expert essay-writers and compiled the following 7-step guide to writing a good essay every time. 👍

#1 Make sure you understand the question

#2 Complete background reading

#3 Make a detailed plan

#4 Write your opening sentences

#5 Flesh out your essay in a rough draft

#6 Evidence your opinion

#7 Final proofread and edit

Now that you have familiarised yourself with the 7 steps standing between you and the perfect essay, let’s take a closer look at each of those stages so that you can get on with crafting your written arguments with confidence . 

#1 Make sure you understand the question

This is the most crucial stage in essay writing – read the essay prompt carefully and understand the question. Highlight the keywords – like ‘compare,’ ‘contrast,’ ‘discuss,’ ‘explain’ or ‘evaluate’ – and let the task sink in before your mind starts racing. There is nothing worse than writing 500 words before realising you have entirely missed the brief. 🧐

#2 Complete background reading

Unless you are writing under exam conditions, you will most likely have been working towards this essay for some time by doing thorough background reading. Re-read relevant chapters and sections, highlight pertinent material and maybe even stray outside the designated reading list – this shows genuine interest and extended knowledge. 📚

#3 Make a detailed plan

Following the handy structure we shared with you above, now is the time to create the ‘skeleton structure’ or essay plan. Working from your essay title, plot out what you want your paragraphs to cover and how that information is going to flow. You don’t need to start writing any full sentences yet but it might be useful to think about the various quotes you plan to use to substantiate each section. 📝

#4 Write your opening sentences

Having mapped out the overall trajectory of your essay, you can start to drill down into the detail. First, write the opening sentence for each of the paragraphs in the body section of your essay. Remember – each paragraph is like a mini-essay – the opening sentence should summarise what the paragraph will then go on to explain in more detail. 🖊️

#5 Flesh out your essay in a rough draft

Next, it’s time to write the bulk of your words and flesh out your arguments. Follow the ‘point, evidence, explain’ method. The opening sentences – already written – should introduce your ‘points’, so now you need to ‘evidence’ them with corroborating research and ‘explain’ how the evidence you’ve presented proves the point you’re trying to make. ✍️

#6 Evidence your opinion

With a rough draft in front of you, you can take a moment to read what you have written so far. Are there any sections that require further substantiation? Have you managed to include the most relevant material you originally highlighted in your background reading? Now is the time to make sure you have evidenced all your opinions and claims with the strongest quotes, citations and material. 📗

#7 Final proofread and edit

This is your final chance to re-read your essay and go over it with a fine-toothed comb before pressing ‘submit’. We highly recommend leaving a day or two between finishing your essay and the final proofread if possible – you’ll be amazed at the difference this makes, allowing you to return with a fresh pair of eyes and a more discerning judgment. 🤓

If you are looking for advice and support with your own essay-writing adventures, why not try a free trial lesson with GoStudent? Our tutors are experts at boosting academic success and having fun along the way. Get in touch and see how it can work for you today. 🎒


2 Perfect-Scoring TOEFL Writing Samples, Analyzed

The Writing section can be the most daunting section of the TOEFL. You’ll have 50 minutes to write two complete essays that must meet multiple requirements and show a strong grasp of English. Knowing what graders are looking for and reviewing TOEFL Writing samples can go a long way towards helping you get a high score on this section.

This guide will go over both of the TOEFL Writing tasks, explain how they’re graded, go over a high-scoring TOEFL Writing sample for each essay type, and end with TOEFL Writing examples for you to analyze.

The TOEFL Writing Section

The TOEFL Writing section is 50 minutes long (broken into two parts) and contains two tasks: Integrated Writing and Independent Writing. It’s the fourth and final section of the exam. You’ll type both essays on the computer. The next two sections will explain the format and requirements of each of the writing tasks as well as how they will be scored.

TOEFL Integrated Writing Task

The Integrated Writing task requires you to use listening, reading, and writing skills.  For this task, you’ll have three minutes to read a short passage, then you’ll listen to a short (approximately two-minute long) audio clip of a speaker discussing the same topic the written passage covers.

You’ll have 20 minutes to plan and write a response that references both of these sources in order to answer the question . You won’t discuss your own opinion. During the writing time, you’ll be able to look at the written passage again, but you won’t be able to re-hear the audio clip. You’ll be able to take notes while you listen to it though. The suggested response length for this task is 150-225 words.


For this essay, you’ll be graded on the quality of your writing as well as on how well your response represents the main points of the audio clip and the written passage and how they relate to each other. Each essay receives a score from 0-5. (You can view the complete rubric for both essay types here.)


TOEFL Independent Writing Task

For the Independent Writing task, you’ll receive a question on a particular topic or issue. You’ll have 30 minutes to plan and write a response that explains your opinion on it. You’ll need to give reasons that support your opinion. It’s recommended that your response to this task be at least 300 words.

You’ll be graded on how well you develop your ideas, how well your essay is organized, and how accurately you use English to express your ideas.

Top-Scoring TOEFL Integrated Writing Sample

Below is an official TOEFL Integrated Writing sample question as well as an essay response that received a score of 5. It includes a written passage, the transcript of a lecture (which would be an audio recording on the actual TOEFL), and the essay prompt. After the prompt is an example of a top-scoring essay. You can read the essay in full, then read our comments on what exactly about this essay earns it a top score.

Integrated Writing Example Prompt

You have three minutes to read the following passage and take notes.

In many organizations, perhaps the best way to approach certain new projects is to assemble a group of people into a team. Having a team of people attack a project offers several advantages. First of all, a group of people has a wider range of knowledge, expertise, and skills than any single individual is likely to possess. Also, because of the numbers of people involved and the greater resources they possess, a group can work more quickly in response to the task assigned to it and can come up with highly creative solutions to problems and issues. Sometimes these creative solutions come about because a group is more likely to make risky decisions that an individual might not undertake. This is because the group spreads responsibility for a decision to all the members and thus no single individual can be held accountable if the decision turns out to be wrong.

Taking part in a group process can be very rewarding for members of the team. Team members who have a voice in making a decision will no doubt feel better about carrying out the work that is entailed by that decision than they might doing work that is imposed on them by others. Also, the individual team member has a much better chance to “shine,” to get his or her contributions and ideas not only recognized but recognized as highly significant, because a team’s overall results can be more far-reaching and have greater impact than what might have otherwise been possible for the person to accomplish or contribute working alone.

Now listen to part of a lecture on the topic you just read about.

(Professor) Now I want to tell you about what one company found when it decided that it would turn over some of its new projects to teams of people, and make the team responsible for planning the projects and getting the work done. After about six months, the company took a look at how well the teams performed. On virtually every team, some members got almost a “free ride” … they didn’t contribute much at all, but if their team did a good job, they nevertheless benefited from the recognition the team got. And what about group members who worked especially well and who provided a lot of insight on problems and issues? Well…the recognition for a job well done went to the group as a whole, no names were named. So it won’t surprise you to learn that when the real contributors were asked how they felt about the group process, their attitude was just the opposite of what the reading predicts. Another finding was that some projects just didn’t move very quickly. Why? Because it took so long to reach consensus…it took many, many meetings to build the agreement among group members about how they would move the project along. On the other hand, there were other instances where one or two people managed to become very influential over what their group did. Sometimes when those influencers said “That will never work” about an idea the group was developing, the idea was quickly dropped instead of being further discussed. And then there was another occasion when a couple influencers convinced the group that a plan of theirs was “highly creative.” And even though some members tried to warn the rest of the group that the project was moving in directions that might not work, they were basically ignored by other group members. Can you guess the ending to *this* story? When the project failed, the blame was placed on all the members of the group.

You have 20 minutes to plan and write your response. Your response will be judged on the basis of the quality of your writing and on how well your response presents the points in the lecture and their relationship to the reading passage. Typically, an effective response will be 150 to 225 words.

Summarize the points made in the lecture you just heard, explaining how they cast doubt on points made in the reading.

TOEFL Integrated Writing Sample Essay

The lecturer talks about research conducted by a firm that used the group system to handle their work. He says that the theory stated in the passage was very different and somewhat inaccurate when compared to what happened for real.

First, some members got free rides. That is, some didn’t work hard but gotrecognition for the success nontheless. This also indicates that people who worked hard was not given recognition they should have got. In other words, they weren’t given the oppotunity to “shine”. This derectly contradicts what the passage indicates.

Second, groups were slow in progress. The passage says that groups are nore responsive than individuals because of the number of people involved and their aggregated resources. However, the speaker talks about how the firm found out that groups were slower than individuals in dicision making. Groups needed more time for meetings, which are neccesary procceedures in decision making. This was another part where experience contradicted theory.

Third, influetial people might emerge, and lead the group towards glory or failure. If the influent people are going in the right direction there would be no problem. But in cases where they go in the wrong direction, there is nobody that has enough influence to counter the decision made. In other words, the group might turn into a dictatorship, with the influential party as the leader, and might be less flexible in thinking. They might become one-sided, and thus fail to succeed.

TOEFL Writing Sample Analysis

There are three key things this TOEFL example essay does that result in its high score:

  • Clearly presents main points
  • Contrasts lecture and reading points
  • Contains few grammatical and spelling errors

This essay clearly organizes the three main points made in the lecture,  which is what the first part of the prompt asked for. (“Summarize the points made in the lecture you just heard.”) There is one paragraph for each point, and the point is clearly stated within the first sentence of the paragraph followed by specific details from the lecture. This organization makes it easy to follow the writer’s thinking and see that they understood the lecture.

Additionally, the essay clearly contrasts points made in the lecture with points made in the reading. Each main paragraph includes an example of how the two are different, and the writer makes these differences clear by using words and phrases such as “however” and “this directly contradicts.” Stating these differences answers the second part of the prompt (“explain how they cast doubt on points made in the reading”) and shows that the writer understood both the lecture and reading well enough to differentiate between the two.

Finally, there are only a few minor spelling and grammar errors, the most noticeable of which is the incorrect use of the word “influent” in the final paragraph (it should be “influential”), and they do not detract from the meaning of the essay. This writer shows a strong grasp of the English language, a key TOEFL skill.

This essay shows that the writer understood the main points of both the lecture and the reading well enough to both describe them and contrast them. That, along with the relatively few mechanical errors, gives the essay a top score.


Top-Scoring Independent TOEFL Writing Sample

Below is an official Independent Writing prompt and top-scoring sample essay. Beneath the essay we analyze what about the essay resulted in it receiving a top score.

Independent Writing Example Prompt

Directions Read the question below. You have 30 minutes to plan, write, and revise your essay. Typically, an effective essay will contain a minimum of 300 words.

Do you agree or disagree with the following statement? Always telling the truth is the most important consideration in any relationship. Use specific reasons and examples to support your answer.

Independent TOEFL Writing Sample Essay

the traditional virtue of telling the truth in all situations is increasingly doubted by many in today’s world. many believe that telling the truth is not always the best policy when dealing with people. moreover, the line of a “truth” is becoming more and more vague. this essay will explore the importance of telling the truth in relationships between people.

we all understand that often the truth is offending and may not be a very nice thing to both hear or say. lies or white lies often have their advantages. the manipulation of white lies is the most obvious the business world. how many times have we heard that some product is “the finest” or “the cheapest”? how many times have we heard that products have such and such “magical functions”? advertising is about persuasion, and many would agree that if a company is to tell the absolute truth about it’s products, no one would be interested in even having a look at the products.

the same logic applies to human relationships. if your friend had worn a newly purchased dress on her birthday and energetically asked you if it was a worthy buy, would you freely express your opinion that you had never seen a dress as the one she’s currently wearing? and spoil her birthday? unarguably, hiding(entirely or particially) the truth in some situations can be quite handy indeed. confrontations and disputes can seemingly be avoided.

however, there is always the risk factor of the truth emerging sooner or later when telling an untruth. the basic trust in any relationships(businessman/customer, friends, parents/children) will be blotched, and would have an impact on the future relationship between both parties. the story of the “the boy who cried wolf” fully illustrates the consequenes of telling untruths. no one will believe you when you’re telling the truth. your word will have no weighting.

in addition, another “bad factor” of telling untruths is that you have absolutely no control over when the truth(of previous untruths) will emerge. untruths breed pain in both parties: tears when the truth is uncovered after a period of time; fear and the burden of sharing a “secret”. in the long run, it seems that hiding the truth is not beneficial to either party. everyone hates betrayal. even if it is the trend to occasionally hide the truth in relationships, it is strongly recommended that not to follow that trend as the risk and the consequences of the truth unfolded overwhelms the minimal advantages one can derive from not telling the truth. afterall, it is understood that relationships are founded on “trust” which goes hand in hand with “truth”. indeed telling the truth is the most important consideration in any relationship between people. always.

There are three key things this essay does that result in its high score, and each is explained in more detail below.

  • Is well organized
  • Uses specific examples
  • Contains few spelling and grammatical errors

The essay, like the first one, is well organized. The writer’s position is clear within the first few sentences, and the rest of the essay elaborates on that position. Each paragraph begins with a new major point that is then explained. This logical flow of ideas is easy for readers to follow and shows that the writer knows how to set up a clear argument.

Another reason the essay received a top score is that the writer used specific examples to make her point. By using specific examples, such as a friend asking your opinion of a newly purchased dress and the phrases businesses use to sell products, the writer makes her argument stronger and more concrete.

Finally, despite the lack of capitalization throughout the essay, there are few spelling and grammatical errors, and the ones that do exist don’t detract from the meaning of the essay or make it confusing to understand. This shows a strong command of English and the ability to write in-depth essays that are clear and get their point across.


Where to Find More TOEFL Writing Samples

Below is a list of other places, official and unofficial, where you can find TOEFL Writing examples. You can use these examples to get a better idea of what a high-scoring essay looks like and what graders are looking for on the Writing section.

Official Resources

Official resources are always the best to use since you can be sure the essay prompts are accurate and the sample essays were accurately scored.

TOEFL iBT Writing Sample Responses

This resource contains several sample essays (including the two sample responses used above). The essays on this site received a range of scores, and each comes with an analysis of why it received the score it did. This can be helpful if you want more information on, say, what differentiates an essay that got a “5” from an essay that got a “4”.

TOEFL iBT Test Questions

This is a complete practice TOEFL that includes several sample essays along with score explanations, so you can get a more in-depth look at how and why different essays received the scores they did.

Unofficial Resources

There are numerous unofficial TOEFL writing samples out there, of varying quality. Below are two of the best.

TOEFL Resources

This site has several dozen sample essays for both the Integrated and Independent Writing topics. There’s no scoring analysis, but you do get a good variety of essay topics and essay samples so that you can get a sense of how to approach different essay prompts.

Good Luck TOEFL

Good Luck TOEFL has seven sample Independent Writing essays (no Integrated Writing). There’s no scoring analysis, but the essays and prompts are similar to official TOEFL essay topics.

Review: Analyzing TOEFL Writing Examples

Writing can be a particularly tricky TOEFL section, and seeing TOEFL Writing samples can go a long way to helping you feel more confident. For TOEFL Writing, you’ll need to write two essays, the Integrated Writing Task and the Independent Writing Task.  Looking over the rubrics for both these essays and understanding what graders will be looking for can help you understand what to include in your own essays.

Both essays are scored on a scale of 0-5. Top-scoring essays generally have good organization, use specific examples, answer the prompt completely, and contain only minor spelling and grammar errors. It can also be useful to review other TOEFL Writing samples to get a better idea of what a great TOEFL essay looks like.

What’s Next?

Looking for more information on the TOEFL Writing section? Learn all the tips you need to know in order to ace TOEFL Writing!

Want more tips on how to prepare for TOEFL Writing questions? Check out our guide to the best ways to practice for TOEFL Writing!


Author: Christine Sarikas

Christine graduated from Michigan State University with degrees in Environmental Biology and Geography and received her Master's from Duke University. In high school she scored in the 99th percentile on the SAT and was named a National Merit Finalist. She has taught English and biology in several countries. View all posts by Christine Sarikas


A (Very) Simple Way to Improve Your Writing

  • Mark Rennella


It’s called the “one-idea rule” — and any level of writer can use it.

The “one-idea rule” is a simple concept that can help you sharpen your writing and persuade others by presenting your argument in a clear, concise, and engaging way. What exactly does the rule say?

  • Every component of a successful piece of writing should express only one idea.
  • In persuasive writing, your “one idea” is often the argument or belief you are presenting to the reader. Once you identify what that argument is, the “one-idea rule” can help you develop, revise, and connect the various components of your writing.
  • For instance, let’s say you’re writing an essay. There are three components you will be working with throughout your piece: the title, the paragraphs, and the sentences.
  • Each of these parts should be dedicated to just one idea. The ideas are not identical, of course, but they’re all related. If done correctly, the smaller ideas (in sentences) all build (in paragraphs) to support the main point (suggested in the title).


Most advice about writing looks like a long laundry list of “do’s and don’ts.” These lists can be helpful from time to time, but they’re hard to remember … and, therefore, hard to depend on when you’re having trouble putting your thoughts to paper. During my time in academia, teaching composition at the undergraduate and graduate levels, I saw many people struggle with this.


  • MR Mark Rennella is Associate Editor at HBP and has published two books, Entrepreneurs, Managers, and Leaders and The Boston Cosmopolitans .  



Welcome to the Purdue Online Writing Lab


The Online Writing Lab at Purdue University houses writing resources and instructional material, and we provide these as a free service of the Writing Lab at Purdue. Students, members of the community, and users worldwide will find information to assist with many writing projects. Teachers and trainers may use this material for in-class and out-of-class instruction.

The Purdue On-Campus Writing Lab and Purdue Online Writing Lab assist clients in their development as writers—no matter what their skill level—with on-campus consultations, online participation, and community engagement. The Purdue Writing Lab serves the Purdue, West Lafayette, campus and coordinates with local literacy initiatives. The Purdue OWL offers global support through online reference materials and services.


Creating and Scoring Essay Tests


Essay tests are useful for teachers when they want students to select, organize, analyze, synthesize, and/or evaluate information. In other words, they rely on the upper levels of Bloom's Taxonomy . There are two types of essay questions: restricted and extended response.

  • Restricted Response - These essay questions limit what the student will discuss in the essay based on the wording of the question. For example, "State the main differences between John Adams' and Thomas Jefferson's beliefs about federalism" is a restricted response. What the student is to write about has been expressed to them within the question.
  • Extended Response - These allow students to select what they wish to include in order to answer the question. For example, "In Of Mice and Men , was George's killing of Lennie justified? Explain your answer." The student is given the overall topic, but they are free to use their own judgment and integrate outside information to help support their opinion.

Student Skills Required for Essay Tests

Before expecting students to perform well on either type of essay question, we must make sure that they have the required skills to excel. Following are four skills that students should have learned and practiced before taking essay exams:

  • The ability to select appropriate material from the information learned in order to best answer the question.
  • The ability to organize that material in an effective manner.
  • The ability to show how ideas relate and interact in a specific context.
  • The ability to write effectively in both sentences and paragraphs.

Constructing an Effective Essay Question

Following are a few tips to help in the construction of effective essay questions:

  • Begin with the lesson objectives in mind. Make sure to know what you wish the student to show by answering the essay question.
  • Decide if your goal requires a restricted or extended response. In general, if you wish to see if the student can synthesize and organize the information that they learned, then restricted response is the way to go. However, if you wish them to judge or evaluate something using the information taught during class, then you will want to use the extended response.
  • If you are including more than one essay, be cognizant of time constraints. You do not want to punish students because they ran out of time on the test.
  • Write the question in a novel or interesting manner to help motivate the student.
  • State the number of points that the essay is worth. You can also provide them with a time guideline to help them as they work through the exam.
  • If your essay item is part of a larger objective test, make sure that it is the last item on the exam.

Scoring the Essay Item

One of the downfalls of essay tests is that they lack reliability. Even when teachers grade essays with a well-constructed rubric, subjective decisions are made. Therefore, it is important to be as reliable as possible when scoring your essay items. Here are a few tips to help improve reliability in grading:

  • Determine whether you will use a holistic or analytic scoring system before you write your rubric . With the holistic grading system, you evaluate the answer as a whole, rating papers against each other. With the analytic system, you list specific pieces of information and award points for their inclusion.
  • Prepare the essay rubric in advance. Determine what you are looking for and how many points you will be assigning for each aspect of the question.
  • Avoid looking at names. Some teachers have students put numbers on their essays to try and help with this.
  • Score one item at a time. This helps ensure that you use the same thinking and standards for all students.
  • Avoid interruptions when scoring a specific question. Again, consistency will be increased if you grade the same item on all the papers in one sitting.
  • If an important decision like an award or scholarship is based on the score for the essay, obtain two or more independent readers.
  • Beware of negative influences that can affect essay scoring. These include handwriting and writing style bias, the length of the response, and the inclusion of irrelevant material.
  • Review papers that are on the borderline a second time before assigning a final grade.

HOWTO: 3 Easy Steps to Grading Student Essays


The next step is to take each of the other criteria and define success for each of those, assigning a value to A, B, C and D papers. Those definitions then go into the rubric in the appropriate locations to complete the chart.

Each of the criteria will score points for the essay. The descriptions in the first column are each worth 4 points, the second column 3 points, the third 2 points and the fourth 1 point.

What is the grading process?

Now that your criteria are defined, grading the essay is easy. When grading a student essay with a rubric, it is best to read through the essay once before evaluating for grades. Then, reading through the piece a second time, determine where on the scale the writing sample falls for each of the criteria. If the student shows excellent grammar, good organization, and a good overall effect, he would score a total of ten points. Divide that by the total number of criteria, three in this case, and he finishes with a 3.33, which on a four-point scale is a B+. If you use five criteria to evaluate your essays, divide the total points scored by five to determine the student’s grade.
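The arithmetic above can be sketched in a few lines of Python. This is a minimal sketch assuming a four-point rating scale (excellent = 4, good = 3, fair = 2, poor = 1) and letter-grade cutoffs chosen to match the worked example; the criterion names and cutoffs are illustrative, not an official scale.

```python
# Map rubric ratings to points on the assumed four-point scale.
RATING_POINTS = {"excellent": 4, "good": 3, "fair": 2, "poor": 1}

def grade_essay(ratings):
    """Average the per-criterion points and map to a letter grade.

    `ratings` maps each criterion name to a rating string.
    """
    points = [RATING_POINTS[r] for r in ratings.values()]
    average = sum(points) / len(points)
    # Illustrative letter-grade cutoffs on the four-point scale.
    if average >= 3.7:
        letter = "A"
    elif average >= 3.3:
        letter = "B+"
    elif average >= 3.0:
        letter = "B"
    elif average >= 2.0:
        letter = "C"
    else:
        letter = "D"
    return round(average, 2), letter

# The example from the text: excellent grammar, good organization,
# and a good overall effect give (4 + 3 + 3) / 3 = 3.33, a B+.
ratings = {"grammar": "excellent", "organization": "good", "overall effect": "good"}
print(grade_essay(ratings))  # (3.33, 'B+')
```

Adding a fourth or fifth criterion requires no changes to the function, since it always divides by the number of criteria supplied.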

Once you have written your grading rubric, you may decide to share your criteria with your students.

If you do, they will know exactly what your expectations are and what they need to accomplish to get the grade they desire. You may even choose to make a copy of the rubric for each paper and circle where the student lands for each criterion. That way, each person knows where he needs to focus his attention to improve his grade. The clearer your expectations are and the more feedback you give your students, the more successful your students will be. If you use a rubric in your essay grading, you can communicate those standards as well as make your grading more objective with more practical suggestions for your students. In addition, once you write your rubric you can use it for all future evaluations.


More From Forbes

5 Strategies to Unlock Your Winning College Essay


The college application season is upon us, and high school students everywhere are staring down at one of the most daunting tasks: the college essay. As someone who has guided countless applicants through the admissions process and reviewed admissions essays on an undergraduate admissions committee, I've pinpointed the essential ingredient to a differentiated candidacy—the core of your college admissions X-factor .

The essential ingredient to your college admissions X-factor is your intellectual vitality. Intellectual vitality is your passion for learning and curiosity. By demonstrating and conveying this passion, you can transform an average essay into a compelling narrative that boosts your chances of getting accepted to your top schools. Here are five dynamic strategies to achieve that goal.

Unleash Your Authentic Voice

Admissions officers sift through thousands of essays every year. What stops them in their tracks? An authentic voice that leaps off the page. Forget trying to guess what the admissions committee wants to hear. Focus on being true to yourself. Share your unique perspective, your passions, and your values. Authenticity resonates deeply with application reviewers, making your essay memorable and impactful. You need not have experienced trauma or tragedy to create a strong narrative. You can write about what you know—intellectually or personally—to convey your enthusiasm, creativity, and leadership. Intellectual vitality shines through when you write with personalized reflection about what lights you up.

Weave A Captivating Story

Everyone loves a good story, and your essay is the perfect place to tell yours. The Common Application personal statement offers seven prompts to ground the structure of your narrative. The most compelling stories are often about the smallest moments in life, whether it's shopping at Costco or why you wear socks with holes in them. Think of the Common Application personal statement as a window into your soul rather than a dry list of your achievements or an overly broad, event-based life story. Use vivid anecdotes to bring your experiences to life. A well-told story can showcase your growth, highlight your character, and illustrate how you've overcome challenges. Intellectual vitality often emerges in these narratives, revealing how your curiosity and proactive approach to learning have driven you to explore and innovate.

Reflect And Reveal Insights

It's not just about what you've done—it's about what you've learned along the way. When you are writing about a specific event, you can use the STAR framework—situation, task, action, and result (your learning). Focus most of your writing space on the “R” part of this framework to dive deeply into your experiences and reflect on how they've shaped your aspirations and identity.


The most insightful college-specific supplement essays demonstrate depth of thought, and the ability to connect past experiences with your future life in college and beyond. Reflecting on your intellectual journey signals maturity and a readiness to embrace the college experience. It shows admissions officers that you engage deeply with your studies and are eager to contribute to the academic community.

Highlight Your Contributions—But Don’t Brag

Whether it's a special talent, an unusual hobby, or a unique perspective, showcasing what you can bring to the college environment can make a significant impact. Recognize that the hard work behind the accomplishment is what colleges are interested in learning more about—not a retelling of the accomplishment itself. (Honors and activities can be conveyed in another section of the application.) Walk us through the journey to your summit; don't just take us to the peak and expect us to know how you earned it.

Intellectual vitality can be demonstrated through your proactive approach to solving problems, starting new projects, or leading initiatives that reflect your passion for learning and growth. These experiences often have a place in the college-specific supplement essays. They ground the reasons why you want to study in your major and at the particular college.

Perfect Your Prose

Great writing is essential. Anyone can use AI or a thesaurus to assist with an essay, but AI cannot write your story the way you would tell it. Admissions officers don't give out extra credit for choosing the longest words with the most syllables.

The best essays have clear, coherent language and are free of errors. The story is clearly and specifically told. After drafting, take the time to revise and polish your writing. Seek feedback from teachers, mentors, or trusted friends, but ensure the final piece is unmistakably yours. A well-crafted essay showcases your diligence and attention to detail—qualities that admissions officers highly value. Intellectual vitality is also reflected in your writing process, showing your commitment to excellence and your enthusiasm for presenting your best self.

Crafting a standout college essay is about presenting your true self in an engaging, reflective, and polished manner while showcasing your intellectual vitality. Happy writing.

Dr. Aviva Legatt


Media Companies Are Making a Huge Mistake With AI

News organizations rushing to absolve AI companies of theft are acting against their own interests.


In 2011, I sat in the Guggenheim Museum in New York and watched Rupert Murdoch announce the beginning of a “new digital renaissance” for news. The newspaper mogul was unveiling an iPad-inspired publication called The Daily . “The iPad demands that we completely reimagine our craft,” he said. The Daily shut down the following year, after burning through a reported $40 million.

For as long as I have reported on internet companies, I have watched news leaders try to bend their businesses to the will of Apple, Google, Meta, and more. Chasing tech’s distribution and cash, news firms strike deals to try to ride out the next digital wave. They make concessions to platforms that attempt to take all of the audience (and trust) that great journalism attracts, without ever having to do the complicated and expensive work of the journalism itself. And it never, ever works as planned.

Publishers like News Corp did it with Apple and the iPad, investing huge sums in flashy content that didn’t make them any money but helped Apple sell more hardware. They took payouts from Google to offer their journalism for free through search, only to find that it eroded their subscription businesses. They lined up to produce original video shows for Facebook and to reformat their articles to work well in its new app. Then the social-media company canceled the shows and the app. Many news organizations went out of business.

The Wall Street Journal recently laid off staffers who were part of a Google-funded program to get journalists posting to YouTube channels, after the funding for the program dried up. And still, just as the news business is entering a death spiral, these publishers are making all the same mistakes, and more, with AI.

Adrienne LaFrance: The coming humanist renaissance

Publishers are deep in negotiations with tech firms such as OpenAI to sell their journalism as training for the companies’ models. It turns out that accurate, well-written news is one of the most valuable sources for these models, which have been hoovering up humans’ intellectual output without permission. These AI platforms need timely news and facts to get consumers to trust them. And now, facing the threat of lawsuits, they are pursuing business deals to absolve them of the theft. These deals amount to settling without litigation. The publishers willing to roll over this way aren’t just failing to defend their own intellectual property—they are also trading their own hard-earned credibility for a little cash from the companies that are simultaneously undervaluing them and building products quite clearly intended to replace them.

Late last year, Axel Springer, the European publisher that owns Politico and Business Insider, sealed a deal with OpenAI reportedly worth tens of millions of dollars over several years. OpenAI has been offering other publishers $1 million to $5 million a year to license their content. News Corp's new five-year deal with OpenAI is reportedly valued at as much as $250 million in cash and OpenAI credits. Conversations are heating up. As its negotiations with OpenAI failed, The New York Times sued the firm—as did Alden Global Capital, which owns the New York Daily News and the Chicago Tribune. They were brave moves, although I worry that they are likely to end in deals too.

That media companies would rush to do these deals after being so burned by their tech deals of the past is extraordinarily distressing. And these AI partnerships are far worse for publishers. Ten years ago, it was at least plausible to believe that tech companies would become serious about distributing news to consumers. They were building actual products such as Google News. Today's AI chatbots are still immature and make frequent mistakes. Just this week, Google's AI suggested you should glue cheese to pizza crust to keep it from slipping off.

OpenAI and others say they are interested in building new models for distributing and crediting news, and many news executives I respect believe them. But it’s hard to see how any AI product built by a tech company would create meaningful new distribution and revenue for news. These companies are using AI to disrupt internet search—to help users find a single answer faster than browsing a few links. So why would anyone want to read a bunch of news articles when an AI could give them the answer, maybe with a tiny footnote crediting the publisher that no user will ever click on?

Companies act in their interest. But OpenAI isn’t even an ordinary business. It’s a nonprofit (with a for-profit arm) that wants to promote general artificial intelligence that benefits humanity—though it can’t quite decide what that means. Even if its executives were ardent believers in the importance of news, helping journalism wouldn’t be on their long-term priority list.

Ross Andersen: Does Sam Altman know what he’s creating?

That’s all before we talk about how to price the news. Ask six publishers how they should be paid by these tech companies, and they will spout off six different ideas. One common idea publishers describe is getting a slice of the tech companies’ revenue based on the percentage of the total training data their publications represent. That’s impossible to track, and there’s no way tech companies would agree to it. Even if they did agree to it, there would be no way to check their calculations—the data sets used for training are vast and inscrutable. And let’s remember that these AI companies are themselves struggling to find a consumer business model. How do you negotiate for a slice of something that doesn’t yet exist?

The news industry finds itself in this dangerous spot, yet again, in part because it lacks a long-term focus and strategic patience. Once-family-owned outlets, such as The Washington Post and the Los Angeles Times , have been sold to interested billionaires. Others, like The Wall Street Journal , are beholden to the public markets and face coming generational change among their owners. Television journalism is at the whims of the largest media conglomerates, which are now looking to slice, dice, and sell off their empires at peak market value. Many large media companies are run by executives who want to live to see another quarter, not set up their companies for the next 50 years. At the same time, the industry’s lobbying power is eroding. A recent congressional hearing on the topic of AI and news was overshadowed by OpenAI CEO Sam Altman’s meeting with House Speaker Mike Johnson . Tech companies clearly have far more clout than media companies.

Things are about to get worse. Legacy and upstart media alike are bleeding money and talent by the week. More outlets are likely to shut down, while others will end up in the hands of powerful individuals using them for their own agendas (see the former GOP presidential candidate Vivek Ramaswamy’s activist play for BuzzFeed ).

The long-term solutions are far from clear. But the answer to this moment is painfully obvious. Publishers should be patient and refrain from licensing away their content for relative pennies. They should protect the value of their work, and their archives. They should have the integrity to say no. It’s simply too early to get into bed with the companies that trained their models on professional content without permission and have no compelling case for how they will help build the news business.

Instead of keeping their business-development departments busy, newsrooms should focus on what they do best: making great journalism and serving it up to their readers. Technology companies aren’t in the business of news. And they shouldn’t be. Publishers have to stop looking to them to rescue the news business. We must start saving ourselves.

  • Open access
  • Published: 03 June 2024

Applying large language models for automated essay scoring for non-native Japanese

  • Wenchao Li & Haitao Liu

Humanities and Social Sciences Communications, volume 11, Article number: 723 (2024)


  • Language and linguistics

Recent advancements in artificial intelligence (AI) have led to an increased use of large language models (LLMs) for language assessment tasks such as automated essay scoring (AES), automated listening tests, and automated oral proficiency assessments. The application of LLMs for AES in the context of non-native Japanese, however, remains limited. This study explores the potential of LLM-based AES by comparing the efficiency of different models, i.e. two conventional machine training technology-based methods (Jess and JWriter), two LLMs (GPT and BERT), and one Japanese local LLM (Open-Calm large model). To conduct the evaluation, a dataset consisting of 1400 story-writing scripts authored by learners with 12 different first languages was used. Statistical analysis revealed that GPT-4 outperforms Jess and JWriter, BERT, and the Japanese language-specific trained Open-Calm large model in terms of annotation accuracy and predicting learning levels. Furthermore, by comparing 18 different models that utilize various prompts, the study emphasized the significance of prompts in achieving accurate and reliable evaluations using LLMs.


Conventional machine learning technology in AES

AES has experienced significant growth with the advancement of machine learning technologies in recent decades. In the earlier stages of AES development, conventional machine learning-based approaches were commonly used. These approaches involved the following procedures: (a) feeding the machine with a dataset. In this step, a dataset of essays is provided to the machine learning system. The dataset serves as the basis for training the model and establishing patterns and correlations between linguistic features and human ratings. (b) training the machine learning model on linguistic features that best represent human ratings and can effectively discriminate learners' writing proficiency. These features include lexical richness (Lu, 2012; Kyle and Crossley, 2015; Kyle et al. 2021), syntactic complexity (Lu, 2010; Liu, 2008), and text cohesion (Crossley and McNamara, 2016), among others. Conventional machine learning approaches in AES require human intervention, such as manual correction and annotation of essays. This human involvement is necessary to create a labeled dataset for training the model. Several AES systems have been developed using conventional machine learning technologies. These include the Intelligent Essay Assessor (Landauer et al. 2003), the e-rater engine by Educational Testing Service (Attali and Burstein, 2006; Burstein, 2003), MyAccess with the IntelliMetric scoring engine by Vantage Learning (Elliot, 2003), and the Bayesian Essay Test Scoring system (Rudner and Liang, 2002). These systems have played a significant role in automating the essay scoring process and providing quick and consistent feedback to learners. However, as touched upon earlier, conventional machine learning approaches rely on predetermined linguistic features and often require manual intervention, making them less flexible and potentially limiting their generalizability to different contexts.

In the context of the Japanese language, conventional machine learning-incorporated AES tools include Jess (Ishioka and Kameda, 2006) and JWriter (Lee and Hasebe, 2017). Jess assesses essays by deducting points from a perfect score, using the Mainichi Daily News newspaper as its database. The evaluation criteria employed by Jess encompass rhetorical elements (e.g., reading comprehension, vocabulary diversity, percentage of complex words, and percentage of passive sentences), organizational structures (e.g., forward and reverse connection structures), and content analysis (e.g., latent semantic indexing). JWriter employs linear regression analysis to assign weights to various measurement indices, such as average sentence length and total number of characters; these weighted indices are then combined to derive the overall score. A pilot study involving the Jess model was conducted on 1320 essays at three proficiency levels: primary, intermediate, and advanced. The results indicated that the Jess model failed to significantly distinguish between these levels. Of the 16 measures used, four (median sentence length, median clause length, median number of phrases, and maximum number of phrases) did not show statistically significant differences between the levels. Two measures (the number of attributive declined words and the kanji/kana ratio) exhibited between-level differences but lacked linear progression. The remaining measures, including maximum sentence length, maximum clause length, number of attributive conjugated words, maximum number of consecutive infinitive forms, maximum number of conjunctive-particle clauses, k characteristic value, percentage of big words, and percentage of passive sentences, demonstrated statistically significant between-level differences and displayed linear progression.
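JWriter's weighted-sum scoring can be illustrated with a short sketch. The feature names, weights, and intercept below are purely hypothetical stand-ins chosen to show the mechanics of a linear-regression score; JWriter's actual indices and coefficients are not given in this text.

```python
# Sketch of a JWriter-style linear score: a weighted sum of measurement
# indices plus an intercept. All feature names and weights are hypothetical.
WEIGHTS = {"avg_sentence_length": 0.8, "total_characters": 0.01}
INTERCEPT = 10.0

def linear_score(features, weights=WEIGHTS, intercept=INTERCEPT):
    """Combine measurement indices into a single overall score."""
    return intercept + sum(weights[name] * value for name, value in features.items())
```

With the toy weights above, an essay with an average sentence length of 20 and 500 characters would score 10.0 + 0.8 * 20 + 0.01 * 500 = 31.0.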

Both Jess and JWriter exhibit notable limitations, including the manual selection of feature parameters and weights, which can introduce biases into the scoring process. The reliance on human annotators to label non-native language essays also introduces potential noise and variability in the scoring. Furthermore, an important concern is the possibility of system manipulation and cheating by learners who are aware of the regression equation utilized by the models (Hirao et al. 2020 ). These limitations emphasize the need for further advancements in AES systems to address these challenges.

Deep learning technology in AES

Deep learning has emerged as one of the approaches for improving the accuracy and effectiveness of AES. Deep learning-based AES methods utilize artificial neural networks that mimic the human brain’s functioning through layered algorithms and computational units. Unlike conventional machine learning, deep learning autonomously learns from the environment and past errors without human intervention. This enables deep learning models to establish nonlinear correlations, resulting in higher accuracy. Recent advancements in deep learning have led to the development of transformers, which are particularly effective in learning text representations. Noteworthy examples include bidirectional encoder representations from transformers (BERT) (Devlin et al. 2019 ) and the generative pretrained transformer (GPT) (OpenAI).

BERT is a linguistic representation model that utilizes a transformer architecture and is trained on two tasks: masked linguistic modeling and next-sentence prediction (Hirao et al. 2020 ; Vaswani et al. 2017 ). In the context of AES, BERT follows specific procedures, as illustrated in Fig. 1 : (a) the tokenized prompts and essays are taken as input; (b) special tokens, such as [CLS] and [SEP], are added to mark the beginning and separation of prompts and essays; (c) the transformer encoder processes the prompt and essay sequences, resulting in hidden layer sequences; (d) the hidden layers corresponding to the [CLS] tokens (T[CLS]) represent distributed representations of the prompts and essays; and (e) a multilayer perceptron uses these distributed representations as input to obtain the final score (Hirao et al. 2020 ).

Figure 1: AES system with BERT (Hirao et al. 2020).
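The flow of steps (a)-(e) can be traced with a schematic, dependency-free sketch. The tokenizer, encoder, and perceptron weights here are toy stand-ins, not a real BERT; an actual system would use a pretrained model (e.g., via the transformers library). Only the sequence of steps is faithful to the description above.

```python
def toy_tokenize(text):
    return text.lower().split()

def toy_encode(tokens, dim=4):
    # (c) stand-in for the transformer encoder: one deterministic
    # hidden vector per token (derived from character codes)
    return [[(sum(map(ord, tok)) * (i + 1)) % 97 / 97.0 for i in range(dim)]
            for tok in tokens]

def toy_score(prompt, essay):
    # (a), (b): tokenize, then add special tokens marking the beginning
    # and the separation of prompt and essay
    seq = ["[CLS]"] + toy_tokenize(prompt) + ["[SEP]"] + toy_tokenize(essay) + ["[SEP]"]
    hidden = toy_encode(seq)   # (c) hidden-layer sequence
    t_cls = hidden[0]          # (d) distributed representation at [CLS]
    weights = [0.3, -0.1, 0.5, 0.2]   # (e) toy single-layer perceptron head
    return sum(w * h for w, h in zip(weights, t_cls))
```

The real model's encoder is contextual (each hidden vector depends on the whole sequence), which this per-token stand-in deliberately does not attempt to reproduce.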

The training of BERT on a substantial amount of sentence data through the masked language model (MLM) allows it to capture contextual information within the hidden layers. Consequently, BERT is expected to be capable of identifying artificial essays as invalid and assigning them lower scores (Mizumoto and Eguchi, 2023). In the context of AES for nonnative Japanese learners, Hirao et al. (2020) combined the long short-term memory (LSTM) model proposed by Hochreiter and Schmidhuber (1997) with BERT to develop a tailored automated essay scoring system. The findings of their study revealed that the BERT model outperformed both the conventional machine learning approach utilizing character-type features such as "kanji" and "hiragana" and the standalone LSTM model. Takeuchi et al. (2021) presented an approach to Japanese AES that eliminates the requirement for pre-scored essays by relying solely on reference texts or a model answer for the essay task. They investigated multiple similarity evaluation methods, including frequency of morphemes, idf values calculated on Wikipedia, LSI, LDA, word-embedding vectors, and document vectors produced by BERT. The experimental findings revealed that the method utilizing the frequency of morphemes with idf values exhibited the strongest correlation with human-annotated scores across different essay tasks. The utilization of BERT in AES encounters several limitations. First, essays often exceed the model's maximum length limit. Second, only score labels are available for training, which restricts access to additional information.

Mizumoto and Eguchi (2023) were pioneers in employing the GPT model for AES in non-native English writing. Their study focused on evaluating the accuracy and reliability of AES using the GPT-3 text-davinci-003 model, analyzing a dataset of 12,100 essays from the corpus of nonnative written English (TOEFL11). The findings indicated that AES utilizing the GPT-3 model exhibited a certain degree of accuracy and reliability, and the authors suggest that GPT-3-based AES systems hold the potential to support human ratings. However, applying the GPT model to AES presents a unique natural language processing (NLP) task that involves considerations such as nonnative language proficiency, the influence of the learner's first language on the output in the target language, and identifying the linguistic features that best indicate writing quality in a specific language. These linguistic features may differ morphologically or syntactically from those present in the learners' first language, as observed in (1)–(3).

Isolating

我-送了-他-一本-书

Wǒ-sòngle-tā-yī běn-shū

1SG-give.PAST-3SG-one.CL-book

“I gave him a book.”

Agglutinative

彼-に-本-を-あげ-まし-た

Kare-ni-hon-o-age-mashi-ta

3SG-DAT-book-ACC-give.HON-PAST

Inflectional

give, give-s, gave, given, giving

Additionally, the morphological agglutination and subject-object-verb (SOV) order in Japanese, along with its idiomatic expressions, pose additional challenges for applying language models in AES tasks (4).

足-が 棒-に なり-ました

Ashi-ga bō-ni nari-mashita

leg-NOM stick-DAT become-PAST

“My leg became like a stick (I am extremely tired).”

The example sentence demonstrates the morpho-syntactic structure of Japanese and the presence of an idiomatic expression. In this sentence, the verb “なる” (naru), meaning “to become”, appears at the end. The verb stem “なり” (nari) is attached with morphemes indicating honorification (“ます” masu) and tense (“た” ta), showcasing agglutination. While the sentence can be literally translated as “my leg became like a stick”, it carries the idiomatic meaning “I am extremely tired”.

To overcome this issue, CyberAgent Inc. (2023) has developed the Open-Calm series of language models specifically designed for Japanese. Open-Calm consists of pre-trained models available in various sizes, such as Small, Medium, Large, and 7b. Figure 2 depicts the fundamental structure of the Open-Calm model. A key feature of this architecture is the incorporation of the LoRA adapter and GPT-NeoX frameworks, which enhance its language processing capabilities.

Figure 2: GPT-NeoX model architecture (Okgetheng and Takeuchi, 2024).

In a recent study, Okgetheng and Takeuchi (2024) assessed the efficacy of Open-Calm language models in grading Japanese essays. The research utilized a dataset of approximately 300 essays annotated by native Japanese educators. The findings demonstrate the considerable potential of Open-Calm language models in automated Japanese essay scoring; among the Open-Calm family, the Open-Calm Large model (referred to as OCLL) exhibited the highest performance. However, as of this writing, the Open-Calm Large model does not offer public access to its server, so users must independently deploy and operate the environment for OCLL, which requires a PC equipped with an NVIDIA GeForce RTX 3060 (8 or 12 GB of VRAM).

In summary, while the potential of LLMs in automated scoring of nonnative Japanese essays has been demonstrated in two studies—BERT-driven AES (Hirao et al. 2020 ) and OCLL-based AES (Okgetheng and Takeuchi, 2024 )—the number of research efforts in this area remains limited.

Another significant challenge in applying LLMs to AES lies in prompt engineering and ensuring its reliability and effectiveness (Brown et al. 2020 ; Rae et al. 2021 ; Zhang et al. 2021 ). Various prompting strategies have been proposed, such as the zero-shot chain of thought (CoT) approach (Kojima et al. 2022 ), which involves manually crafting diverse and effective examples. However, manual efforts can lead to mistakes. To address this, Zhang et al. ( 2021 ) introduced an automatic CoT prompting method called Auto-CoT, which demonstrates matching or superior performance compared to the CoT paradigm. Another prompt framework is trees of thoughts, enabling a model to self-evaluate its progress at intermediate stages of problem-solving through deliberate reasoning (Yao et al. 2023 ).

Beyond linguistic studies, there has been a noticeable increase in the number of foreign workers in Japan and Japanese learners worldwide (Ministry of Health, Labor, and Welfare of Japan, 2022; Japan Foundation, 2021). However, existing assessment methods, such as the Japanese Language Proficiency Test (JLPT), J-CAT, and TTBJ, primarily focus on reading, listening, vocabulary, and grammar skills, neglecting the evaluation of writing proficiency. As the number of workers and language learners continues to grow, there is a rising demand for an efficient AES system that can reduce costs and time for raters and be utilized for employment, examinations, and self-study purposes.

This study aims to explore the potential of LLM-based AES by comparing the effectiveness of five models: two LLMs (GPT and BERT), one Japanese local LLM (OCLL), and two conventional machine learning-based methods (the linguistic feature-based scoring tools Jess and JWriter).

The research questions addressed in this study are as follows:

To what extent do the LLM-driven AES and linguistic feature-based AES, when used as automated tools to support human rating, accurately reflect test takers’ actual performance?

What influence does the prompt have on the accuracy and performance of LLM-based AES methods?

The subsequent sections of the manuscript cover the methodology, including the assessment measures for nonnative Japanese writing proficiency, criteria for prompts, and the dataset. The evaluation section focuses on the analysis of annotations and rating scores generated by LLM-driven and linguistic feature-based AES methods.

Methodology

The dataset utilized in this study was obtained from the International Corpus of Japanese as a Second Language (I-JAS). This corpus comprises 1000 participants representing 12 different first languages. For the study, the participants were given a story-writing task on a personal computer: they were required to write two stories based on the 4-panel illustrations titled “Picnic” and “The key” (see Appendix A). Background information for the participants was provided by the corpus, including their Japanese language proficiency levels assessed through two online tests, J-CAT and SPOT, which evaluate reading, listening, vocabulary, and grammar abilities. The learners’ proficiency levels were categorized into six levels aligned with the Common European Framework of Reference for Languages (CEFR) and the Reference Framework for Japanese Language Education (RFJLE): A1, A2, B1, B2, C1, and C2. According to Lee et al. (2015), there is a high level of agreement (r = 0.86) between the J-CAT and SPOT assessments, indicating that the proficiency certifications provided by J-CAT are consistent with those of SPOT. However, the scores of J-CAT and SPOT do not have a one-to-one correspondence. In this study, the J-CAT scores were used as the benchmark to differentiate learners of different proficiency levels. A total of 1400 essays were utilized, representing the beginner (aligned with A1), A2, B1, B2, C1, and C2 levels based on the J-CAT scores. Table 1 provides information about the learners’ proficiency levels and their corresponding J-CAT and SPOT scores.

A dataset comprising a total of 1400 essays from the story-writing tasks was collected. Among these, 714 essays were utilized to evaluate the reliability of the LLM-based AES method, while the remaining 686 essays were designated as development data to assess the LLM-based AES’s capability to distinguish participants with varying proficiency levels. The GPT-4 API was used in this study. A detailed explanation of the prompt-assessment criteria is provided in the Prompt section. All essays were sent to the model for measurement and scoring.
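As a rough illustration of how essays might be packaged into a rating prompt for the model (the study's actual prompt wording is specified in its Prompt section and is not reproduced here), a scoring prompt could be assembled as follows. The instruction text, the criterion list, and the 1–6 scale are all assumptions made for the sketch:

```python
def build_scoring_prompt(essay, criteria, scale=(1, 6)):
    # Assemble a rating instruction; wording and scale are illustrative only.
    lo, hi = scale
    rubric = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are rating a story written by a learner of Japanese.\n"
        f"Score it from {lo} to {hi} on each criterion:\n"
        f"{rubric}\n"
        "Reply with one line per criterion in the form <criterion>: <score>.\n"
        "Essay:\n" + essay
    )
```

The resulting string would then be sent to the model's API; parsing the one-line-per-criterion reply back into numeric scores is the symmetric step on the response side.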

Measures of writing proficiency for nonnative Japanese

Japanese exhibits a morphologically agglutinative structure where morphemes are attached to the word stem to convey grammatical functions such as tense, aspect, voice, and honorifics, e.g. (5).

食べ-させ-られ-まし-た-か

tabe-sase-rare-mashi-ta-ka

[eat(stem)-causative-passive-honorification-past-question marker]

Japanese employs nine case particles to indicate grammatical functions: the nominative case particle が (ga), the accusative case particle を (o), the genitive case particle の (no), the dative case particle に (ni), the locative/instrumental case particle で (de), the ablative case particle から (kara), the directional case particle へ (e), and the comitative case particle と (to). The agglutinative nature of the language, combined with the case particle system, provides an efficient means of distinguishing between active and passive voice, either through morphemes or case particles, e.g., 食べる taberu “eat (conclusive form)” (active voice) vs. 食べられる taberareru “be eaten (conclusive form)” (passive voice). In the active voice, “パンを食べる” (pan o taberu) translates to “to eat bread”; in the passive voice, “パンが食べられた” (pan ga taberareta) means “(the) bread was eaten”. Additionally, different conjugations of the same lemma are counted as a single type to ensure a comprehensive assessment of the language features: for example, 食べる taberu “eat (conclusive)”, 食べている tabeteiru “eat (progressive)”, and 食べた tabeta “eat (past)” are treated as one type.
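The convention of collapsing conjugated forms into one type can be sketched with a toy lemma dictionary. The mapping below covers only the example forms from the text; a real system would obtain lemmas from a morphological analyzer such as MeCab:

```python
# Toy lemma map: conjugated form -> dictionary form. Illustrative only;
# real lemmatization would come from a morphological analyzer.
LEMMA = {
    "食べる": "食べる",
    "食べている": "食べる",
    "食べた": "食べる",
    "パン": "パン",
}

def count_types(tokens, lemma_map=LEMMA):
    # Different conjugations of the same lemma count as a single type;
    # unknown tokens fall back to their surface form.
    return len({lemma_map.get(tok, tok) for tok in tokens})
```

Under this convention, the token list [食べる, 食べている, 食べた, パン] contains four tokens but only two types.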

To incorporate these features, previous research (Suzuki, 1999 ; Watanabe et al. 1988 ; Ishioka, 2001 ; Ishioka and Kameda, 2006 ; Hirao et al. 2020 ) has identified complexity, fluency, and accuracy as crucial factors for evaluating writing quality. These criteria are assessed through various aspects, including lexical richness (lexical density, diversity, and sophistication), syntactic complexity, and cohesion (Kyle et al. 2021 ; Mizumoto and Eguchi, 2023 ; Ure, 1971 ; Halliday, 1985 ; Barkaoui and Hadidi, 2020 ; Zenker and Kyle, 2021 ; Kim et al. 2018 ; Lu, 2017 ; Ortega, 2015 ). Therefore, this study proposes five scoring categories: lexical richness, syntactic complexity, cohesion, content elaboration, and grammatical accuracy. A total of 16 measures were employed to capture these categories. The calculation process and specific details of these measures can be found in Table 2 .

T-unit, first introduced by Hunt ( 1966 ), is a measure used for evaluating speech and composition. It serves as an indicator of syntactic development and represents the shortest units into which a piece of discourse can be divided without leaving any sentence fragments. In the context of Japanese language assessment, Sakoda and Hosoi ( 2020 ) utilized the T-unit as the basic unit for assessing the accuracy and complexity of Japanese learners' speaking and storytelling. The calculation of T-units in Japanese follows these principles:

A single main clause constitutes 1 T-unit, regardless of the presence or absence of dependent clauses, e.g. (6).

ケンとマリはピクニックに行きました (main clause): 1 T-unit.

If a sentence contains a main clause along with subclauses, each subclause is considered part of the same T-unit, e.g. (7).

天気が良かったので (subclause)、ケンとマリはピクニックに行きました (main clause): 1 T-unit.

In the case of coordinate clauses, where multiple clauses are connected, each coordinated clause is counted separately. Thus, a sentence with coordinate clauses may have 2 T-units or more, e.g. (8).

ケンは地図で場所を探して (coordinate clause)、マリはサンドイッチを作りました (coordinate clause): 2 T-units.
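The three counting principles above can be sketched as follows, assuming clauses have already been segmented and labeled by a parser; the labels "main", "sub", and "coord" are illustrative, not part of the original study's implementation.

```python
# Sketch of T-unit counting: one T-unit per main clause, subordinate
# clauses attach to the current T-unit, and each coordinate clause
# counts separately.
def count_t_units(clauses):
    """clauses: list of (kind, text) pairs with kind in {main, sub, coord}."""
    return sum(1 for kind, _ in clauses if kind in ("main", "coord"))

# (6) single main clause -> 1 T-unit
ex6 = [("main", "ケンとマリはピクニックに行きました")]
# (7) subordinate clause + main clause -> still 1 T-unit
ex7 = [("sub", "天気が良かったので"), ("main", "ケンとマリはピクニックに行きました")]
# (8) two coordinate clauses -> 2 T-units
ex8 = [("coord", "ケンは地図で場所を探して"), ("coord", "マリはサンドイッチを作りました")]
print(count_t_units(ex6), count_t_units(ex7), count_t_units(ex8))  # 1 1 2
```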

Lexical diversity refers to the range of words used within a text (Engber, 1995 ; Kyle et al. 2021 ) and is considered a useful measure of the breadth of vocabulary in Ln production (Jarvis, 2013a , 2013b ).

The type/token ratio (TTR) is widely recognized as a straightforward measure for calculating lexical diversity and has been employed in numerous studies. These studies have demonstrated a strong correlation between TTR and other methods of measuring lexical diversity (e.g., Bentz et al. 2016 ; Čech and Miroslav, 2018 ; Çöltekin and Taraka, 2018 ). TTR is computed by considering both the number of unique words (types) and the total number of words (tokens) in a given text. Given that the length of learners’ writing texts can vary, this study employs the moving average type-token ratio (MATTR) to mitigate the influence of text length. MATTR is calculated using a 50-word moving window. Initially, a TTR is determined for words 1–50 in an essay, followed by words 2–51, 3–52, and so on until the end of the essay is reached (Díez-Ortega and Kyle, 2023 ). The final MATTR scores were obtained by averaging the TTR scores for all 50-word windows. The following formula was employed to derive MATTR:

\({\rm{MATTR}}({\rm{W}})=\frac{{\sum }_{{\rm{i}}=1}^{{\rm{N}}-{\rm{W}}+1}{{\rm{F}}}_{{\rm{i}}}}{{\rm{W}}({\rm{N}}-{\rm{W}}+1)}\)

Here, N refers to the number of tokens in the corpus. W is the randomly selected token size (W < N). \({F}_{i}\) is the number of types in each window. The \({\rm{MATTR}}({\rm{W}})\) is the mean of a series of type-token ratios (TTRs) based on the word form for all windows. It is expected that individuals with higher language proficiency will produce texts with greater lexical diversity, as indicated by higher MATTR scores.
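The MATTR computation can be illustrated with the following Python sketch. The 2-word window in the example is only for demonstration; the study itself uses a 50-word window.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: the mean TTR over every window
    of `window` consecutive tokens; falls back to a plain TTR for texts
    no longer than one window."""
    n = len(tokens)
    if n <= window:
        return len(set(tokens)) / n
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(n - window + 1)]
    return sum(ttrs) / len(ttrs)

# Tiny illustration with a 2-word window: windows are (a,a), (a,b), (b,c)
# with TTRs 0.5, 1.0, 1.0, so MATTR = 2.5 / 3.
print(mattr(["a", "a", "b", "c"], window=2))
```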

Lexical density was captured by the ratio of the number of lexical words to the total number of words (Lu, 2012 ). Lexical sophistication refers to the utilization of advanced vocabulary, often evaluated through word frequency indices (Crossley et al. 2013 ; Haberman, 2008 ; Kyle and Crossley, 2015 ; Laufer and Nation, 1995 ; Lu, 2012 ; Read, 2000 ). In the context of writing, lexical sophistication can be interpreted as vocabulary breadth, which entails the appropriate usage of vocabulary items across various lexico-grammatical contexts and registers (Garner et al. 2019 ; Kim et al. 2018 ; Kyle et al. 2018 ). In Japanese specifically, words are considered lexically sophisticated if they are not included in the “Japanese Education Vocabulary List Ver 1.0”. Footnote 4 Consequently, lexical sophistication was calculated as the number of sophisticated word types relative to the total number of words per essay. Furthermore, it has been suggested that, in Japanese writing, sentences should ideally be no longer than 40 to 50 characters, as this promotes readability. Therefore, the median and maximum sentence lengths can serve as useful assessment indices (Ishioka and Kameda, 2006 ).
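The two ratios just defined can be sketched as follows. Tokens are assumed to be lemmatized (surface, POS) pairs; the POS tag set and the reference vocabulary are hypothetical stand-ins for a morphological analyzer's output and the "Japanese Education Vocabulary List Ver 1.0".

```python
# Sketch: lexical density and lexical sophistication ratios.
BASIC_VOCAB = {"パン", "食べる", "行く"}       # placeholder for the reference list
CONTENT_POS = {"noun", "verb", "adj", "adv"}    # lexical (content) word classes

def lexical_density(tagged):
    """Number of lexical words divided by total number of words."""
    return sum(1 for _, pos in tagged if pos in CONTENT_POS) / len(tagged)

def lexical_sophistication(tagged):
    """Word types absent from the reference list divided by total words."""
    types = {w for w, _ in tagged}
    return sum(1 for w in types if w not in BASIC_VOCAB) / len(tagged)

essay = [("パン", "noun"), ("を", "particle"), ("おいしい", "adj"), ("食べる", "verb")]
print(lexical_density(essay), lexical_sophistication(essay))  # 0.75 0.5
```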

Syntactic complexity was assessed using several measures: the mean length of clauses, verb phrases per T-unit, clauses per T-unit, dependent clauses per T-unit, complex nominals per clause, adverbial clauses per clause, coordinate phrases per clause, and mean dependency distance (MDD). The MDD reflects the distance between the governor and dependent positions in a sentence; a larger dependency distance indicates a higher cognitive load and greater complexity in syntactic processing (Liu, 2008 ; Liu et al. 2017 ). The MDD has been established as an efficient metric for measuring syntactic complexity (Jiang, Ouyang, and Liu, 2019 ; Li and Yan, 2021 ). To calculate the MDD, words in a sentence are assumed to be assigned positions in linear order, W1 … Wi … Wn. In any dependency relationship between words Wa and Wb, Wa is the governor and Wb is the dependent, and the dependency distance is the absolute value of the difference between their position numbers. The MDD of an entire sentence is the mean of these distances:

MDD = \(\frac{1}{n}{\sum }_{i=1}^{n}|{\rm{D}}{{\rm{D}}}_{i}|\)

In this formula, \(n\) represents the number of words in the sentence, and \({{DD}}_{i}\) is the dependency distance of the \({i}^{{th}}\) dependency relationship of the sentence. To illustrate, consider the sentence ‘Mary-ga John-ni keshigomu-o watashita’ [Mary-nom John-dat eraser-acc give-past], ‘Mary gave John an eraser’. The sentence’s MDD would be 2. Table 3 provides the CSV file used as a prompt for GPT-4.
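The MDD formula can be verified on this example with a short sketch, assuming a dependency parse is given as (governor position, dependent position) pairs:

```python
def mdd(dependencies):
    """Mean dependency distance over (governor_pos, dependent_pos) pairs,
    with positions counted in linear word order."""
    return sum(abs(g - d) for g, d in dependencies) / len(dependencies)

# Mary-ga(1) John-ni(2) keshigomu-o(3) watashita(4): the three arguments
# all depend on the verb in position 4, giving distances 3, 2, 1.
print(mdd([(4, 1), (4, 2), (4, 3)]))  # (3 + 2 + 1) / 3 = 2.0
```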

Cohesion (semantic similarity) and content elaboration aim to capture the ideas presented in test takers’ essays. Cohesion was assessed using three measures: synonym overlap/paragraph (topic), synonym overlap/paragraph (keywords), and word2vec cosine similarity. Content elaboration and development were measured as the number of metadiscourse markers (types)/number of words. To capture content closely, this study proposed a novel distance-based representation, encoding the cosine distance between the i-vectors of the learner’s essay and of the essay task (topic and keywords). The learner’s essay is decoded into a word sequence and aligned to the essay task’s topic and keywords for log-likelihood measurement. The cosine distance yields the content elaboration score for the learner’s essay. The cosine similarity between target and reference vectors is shown in (11): assuming there are i essays, with (Li, …, Ln) and (Ni, …, Nn) the vectors representing the learner’s essay and the task’s topic and keywords respectively, the content elaboration distance between Li and Ni was calculated as follows:

\(\cos \left(\theta \right)=\frac{{\rm{L}}\,\cdot\, {\rm{N}}}{\left|{\rm{L}}\right|{\rm{|N|}}}=\frac{\mathop{\sum }\nolimits_{i=1}^{n}{L}_{i}{N}_{i}}{\sqrt{\mathop{\sum }\nolimits_{i=1}^{n}{L}_{i}^{2}}\sqrt{\mathop{\sum }\nolimits_{i=1}^{n}{N}_{i}^{2}}}\)

A high similarity value indicates a low difference between the two recognition outcomes, which in turn suggests a high level of proficiency in content elaboration.
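Equation (11) corresponds directly to the following sketch; the two small vectors are invented for illustration.

```python
import math

def cosine_similarity(l_vec, n_vec):
    """Cosine of the angle between learner-essay vector L and task
    (topic/keyword) vector N: dot product over the product of norms."""
    dot = sum(l * n for l, n in zip(l_vec, n_vec))
    norms = math.sqrt(sum(l * l for l in l_vec)) * math.sqrt(sum(n * n for n in n_vec))
    return dot / norms

print(cosine_similarity([1.0, 2.0, 0.0], [1.0, 2.0, 0.0]))  # identical vectors -> 1.0
```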

To evaluate the effectiveness of the proposed measures in distinguishing different proficiency levels among nonnative Japanese speakers’ writing, we conducted a multi-faceted Rasch measurement analysis (Linacre, 1994 ). This approach applies measurement models to thoroughly analyze various factors that can influence test outcomes, including test takers’ proficiency, item difficulty, and rater severity, among others. The underlying principles and functionality of multi-faceted Rasch measurement are illustrated in (12).

\(\log \left(\frac{{P}_{{nijk}}}{{P}_{{nij}(k-1)}}\right)={B}_{n}-{D}_{i}-{C}_{j}-{F}_{k}\)

(12) defines the logarithmic transformation of the probability ratio ( P nijk /P nij(k-1) ) as a function of multiple parameters. Here, n represents the test taker, i denotes a writing proficiency measure, j corresponds to the human rater, and k represents the proficiency score. The parameter B n signifies the proficiency level of test taker n (where n ranges from 1 to N). D i represents the difficulty parameter of test item i (where i ranges from 1 to L), while C j represents the severity of rater j (where j ranges from 1 to J). Additionally, F k represents the step difficulty for a test taker to move from score ‘k-1’ to k . P nijk refers to the probability of rater j assigning score k to test taker n for test item i , and P nij(k-1) to the probability of the same rater assigning score ‘k-1’. Each facet within the test is treated as an independent parameter and estimated within the same reference framework. To evaluate the consistency of scores obtained through both human and computer analysis, we utilized the Infit mean-square statistic. This statistic is an information-weighted chi-square measure divided by its degrees of freedom, and it is particularly sensitive to unexpected response patterns on items near a person’s proficiency level (Linacre, 2002 ). Fit statistics are assessed against predefined thresholds for acceptable fit. For the Infit MNSQ, which has a mean of 1.00, different thresholds have been suggested: some propose stricter thresholds of 0.7 to 1.3 (Bond et al. 2021 ), while others suggest more lenient thresholds of 0.5 to 1.5 (Eckes, 2009 ). In this study, we adopted the criterion of 0.70–1.30 for the Infit MNSQ.
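As a sketch, the adjacent-category form of (12) can be turned into category probabilities by accumulating the log-odds steps; all parameter values below are invented purely for illustration.

```python
import math

# Many-facet Rasch model (12): category probabilities for rater j scoring
# person n on measure i, from ability B, item difficulty D, rater
# severity C, and step difficulties F[k].
def category_probs(B, D, C, F):
    """F[k-1] is the difficulty of the step from score k-1 to k."""
    logits = [0.0]                       # cumulative log-odds, category 0
    for f in F:
        logits.append(logits[-1] + (B - D - C - f))
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = category_probs(B=1.0, D=0.2, C=0.1, F=[-0.5, 0.0, 0.5])
print(probs)  # four category probabilities summing to 1
```

By construction, log(probs[k] / probs[k-1]) recovers B - D - C - F[k-1], matching (12).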

Moving forward, we can now proceed to assess the effectiveness of the 16 proposed measures based on five criteria for accurately distinguishing various levels of writing proficiency among non-native Japanese speakers. To conduct this evaluation, we utilized the development dataset from the I-JAS corpus, as described in Section Dataset . Table 4 provides a measurement report that presents the performance details of the 16 measures under consideration. The measure separation was found to be 4.02, indicating a clear differentiation among the measures. The reliability index for the measure separation was 0.891, suggesting consistency in the measurement. Similarly, the person separation reliability index was 0.802, indicating the accuracy of the assessment in distinguishing between individuals. All 16 measures demonstrated Infit mean squares within a reasonable range of 0.76 to 1.28. The Synonym overlap/paragraph (topic) measure exhibited a relatively high outfit mean square of 1.46, although its Infit mean square falls within the acceptable range. The standard errors for the measures ranged from 0.13 to 0.28, indicating the precision of the estimates.

Table 5 further illustrates the weights assigned to different linguistic measures for score prediction, with higher weights indicating stronger correlations between those measures and higher scores. The following measures exhibited higher weights than the others: the moving average type-token ratio per essay had a weight of 0.0391; mean dependency distance had a weight of 0.0388; mean length of clause (the number of words divided by the number of clauses) had a weight of 0.0374; complex nominals per T-unit (the number of complex nominals divided by the number of T-units) had a weight of 0.0379; the coordinate phrase rate (the number of coordinate phrases divided by the number of clauses) had a weight of 0.0325; and the grammatical error rate (the number of errors per essay) had a weight of 0.0322.

Criteria (output indicator)

The criteria used to evaluate the writing ability in this study were based on CEFR, which follows a six-point scale ranging from A1 to C2. To assess the quality of Japanese writing, the scoring criteria from Table 6 were utilized. These criteria were derived from the IELTS writing standards and served as assessment guidelines and prompts for the written output.

A prompt is a question or detailed instruction provided to the model to obtain a proper response. After several pilot experiments, we decided to provide the measures (Section Measures of writing proficiency for nonnative Japanese ) as the input prompt and use the criteria (Section Criteria (output indicator) ) as the output indicator. Regarding the prompt language: given that the LLM was tasked with rating Japanese essays, would prompting in Japanese work better? Footnote 5 We conducted experiments comparing the performance of GPT-4 using both English and Japanese prompts. Additionally, we utilized the Japanese local model OCLL with Japanese prompts. Multiple trials were conducted using the same sample. Regardless of the prompt language used, we consistently obtained the same grading results with GPT-4, which assigned a grade of B1 to the writing sample. This suggested that GPT-4 is reliable and capable of producing consistent ratings regardless of the prompt language. On the other hand, when we used Japanese prompts with the Japanese local model OCLL, we encountered inconsistent grading results: out of 10 attempts with OCLL, only 6 yielded consistent grading results (B1), while the remaining 4 showed different outcomes, including A1 and B2 grades. These findings indicated that the language of the prompt was not the determining factor for reliable AES. Instead, the size of the training data and the model parameters played crucial roles in achieving consistent and reliable AES results for the language model.

The following is the utilized prompt, which details all measures and requires the LLM to score the essays using holistic and trait scores.

Please evaluate Japanese essays written by Japanese learners and assign a score to each essay on a six-point scale, ranging from A1, A2, B1, B2, C1 to C2. Additionally, please provide trait scores and display the calculation process for each trait score. The scoring should be based on the following criteria:

Moving average type-token ratio.

Number of lexical words (token) divided by the total number of words per essay.

Number of sophisticated word types divided by the total number of words per essay.

Mean length of clause.

Verb phrases per T-unit.

Clauses per T-unit.

Dependent clauses per T-unit.

Complex nominals per clause.

Adverbial clauses per clause.

Coordinate phrases per clause.

Mean dependency distance.

Synonym overlap paragraph (topic and keywords).

Word2vec cosine similarity.

Connectives per essay.

Conjunctions per essay.

Number of metadiscourse markers (types) divided by the total number of words.

Number of errors per essay.

Japanese essay text

出かける前に二人が地図を見ている間に、サンドイッチを入れたバスケットに犬が入ってしまいました。それに気づかずに二人は楽しそうに出かけて行きました。やがて突然犬がバスケットから飛び出し、二人は驚きました。バスケットの中を見ると、食べ物はすべて犬に食べられていて、二人は困ってしまいました。(ID_JJJ01_SW1)

The score of the example above was B1. Figure 3 provides an example of holistic and trait scores provided by GPT-4 (with a prompt indicating all measures) via Bing Footnote 6 .

Figure 3. Example of GPT-4 AES and feedback (with a prompt indicating all measures).

Statistical analysis

The aim of this study is to investigate the potential use of LLM for nonnative Japanese AES. It seeks to compare the scoring outcomes obtained from feature-based AES tools, which rely on conventional machine learning technology (i.e. Jess, JWriter), with those generated by AI-driven AES tools utilizing deep learning technology (BERT, GPT, OCLL). To assess the reliability of a computer-assisted annotation tool, the study initially established human-human agreement as the benchmark measure. Subsequently, the performance of the LLM-based method was evaluated by comparing it to human-human agreement.

To assess annotation agreement, the study employed standard measures such as precision, recall, and F-score (Brants 2000 ; Lu 2010 ), along with the quadratically weighted kappa (QWK) to evaluate the consistency and agreement in the annotation process. Assume A and B represent human annotators. When comparing the annotations of the two annotators, the following results are obtained. The evaluation of precision, recall, and F-score metrics was illustrated in equations (13) to (15).

\({\rm{Recall}}(A,B)=\frac{{\rm{Number}}\,{\rm{of}}\,{\rm{identical}}\,{\rm{nodes}}\,{\rm{in}}\,A\,{\rm{and}}\,B}{{\rm{Number}}\,{\rm{of}}\,{\rm{nodes}}\,{\rm{in}}\,A}\)

\({\rm{Precision}}(A,\,B)=\frac{{\rm{Number}}\,{\rm{of}}\,{\rm{identical}}\,{\rm{nodes}}\,{\rm{in}}\,A\,{\rm{and}}\,B}{{\rm{Number}}\,{\rm{of}}\,{\rm{nodes}}\,{\rm{in}}\,B}\)

The F-score is the harmonic mean of recall and precision:

\({\rm{F}}-{\rm{score}}=\frac{2* ({\rm{Precision}}* {\rm{Recall}})}{{\rm{Precision}}+{\rm{Recall}}}\)

The highest possible value of an F-score is 1.0, indicating perfect precision and recall; the lowest possible value is 0, which occurs if either precision or recall is zero.
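Equations (13) to (15) can be sketched as follows, treating each annotator's output as a set of nodes; the node labels are invented for illustration.

```python
def agreement_scores(a_nodes, b_nodes):
    """Precision, recall, and F-score between two annotators' node sets,
    following (13)-(15): recall is computed against A, precision against B."""
    identical = len(a_nodes & b_nodes)
    recall = identical / len(a_nodes)
    precision = identical / len(b_nodes)
    f = 2 * precision * recall / (precision + recall) if identical else 0.0
    return precision, recall, f

p, r, f = agreement_scores({"t1", "t2", "t3", "t4"}, {"t1", "t2", "t5"})
# 2 identical nodes: recall = 2/4, precision = 2/3, F = 4/7
print(p, r, f)
```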

In accordance with Taghipour and Ng ( 2016 ), the calculation of QWK involves two steps:

Step 1: Construct a weight matrix W as follows:

\({W}_{{ij}}=\frac{{(i-j)}^{2}}{{(N-1)}^{2}}\)

i represents the annotation made by the tool, while j represents the annotation made by a human rater, and N denotes the total number of possible annotation categories. Matrix O is subsequently computed, where O i,j represents the count of samples annotated as i by the tool and as j by the human annotator. E refers to the expected count matrix, computed from the two annotators’ marginal distributions and normalized so that the sum of its elements matches the sum of the elements in O.

Step 2: With matrices O and E, the QWK is obtained as follows:

\({\rm{K}}=1-\frac{{\sum }_{i,j}{W}_{i,j}\,{O}_{i,j}}{{\sum }_{i,j}{W}_{i,j}\,{E}_{i,j}}\)
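The two-step QWK computation can be sketched as follows, assuming the annotations are integer labels 0 to N-1:

```python
def qwk(tool, human, n_categories):
    """Quadratically weighted kappa between two parallel annotation lists."""
    N = n_categories
    # Step 1: weight matrix W with W[i][j] = (i - j)^2 / (N - 1)^2.
    W = [[(i - j) ** 2 / (N - 1) ** 2 for j in range(N)] for i in range(N)]
    # Observed count matrix O and expected matrix E from the marginals,
    # normalized so the elements of E sum to the same total as O.
    O = [[0.0] * N for _ in range(N)]
    for t, h in zip(tool, human):
        O[t][h] += 1
    ti = [sum(1 for t in tool if t == i) for i in range(N)]
    hj = [sum(1 for h in human if h == j) for j in range(N)]
    total = len(tool)
    E = [[ti[i] * hj[j] / total for j in range(N)] for i in range(N)]
    # Step 2: K = 1 - sum(W*O) / sum(W*E).
    num = sum(W[i][j] * O[i][j] for i in range(N) for j in range(N))
    den = sum(W[i][j] * E[i][j] for i in range(N) for j in range(N))
    return 1 - num / den

print(qwk([0, 1, 2], [0, 1, 2], n_categories=3))  # perfect agreement -> 1.0
```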

The value of the quadratically weighted kappa increases as the level of agreement improves. Further, to assess the accuracy of LLM scoring, the proportional reduction in mean squared error (PRMSE) was employed. The PRMSE approach takes into account the variability observed in human ratings to estimate rater error, which is then subtracted from the variance of the human labels. This calculation provides an overall measure of agreement between the automated scores and the true scores (Haberman et al. 2015 ; Loukina et al. 2020 ; Taghipour and Ng, 2016 ). The computation of PRMSE involves the following steps:

Step 1: Calculate the mean squared errors (MSEs) for the scoring outcomes of the computer-assisted tool (MSE tool) and the human scoring outcomes (MSE human).

Step 2: Determine the PRMSE by comparing the MSE of the computer-assisted tool (MSE tool) with the MSE from human raters (MSE human), using the following formula:

\({\rm{PRMSE}}=1-\frac{{{\rm{MSE}}}_{{\rm{tool}}}}{{{\rm{MSE}}}_{{\rm{human}}}}=1-\frac{{\sum }_{i=1}^{n}{({y}_{i}-{\hat{y}}_{i})}^{2}}{{\sum }_{i=1}^{n}{({y}_{i}-\bar{y})}^{2}}\)

In the numerator, ŷ i represents the score predicted by a specific LLM-driven AES system for sample i , so y i − ŷ i is the difference between the human (true) score and that prediction; the numerator is thus the tool’s squared error. In the denominator, y i − ȳ represents the difference between the human score for sample i and the mean of all human scores; the denominator is thus the variance of the human scores around their mean. The PRMSE is then calculated by subtracting the ratio of the MSE of the tool to the MSE of the human baseline from 1. PRMSE falls within the range of 0 to 1, with larger values indicating smaller errors in the LLM’s scoring relative to human raters. In other words, a higher PRMSE implies that the LLM’s scoring predicts the true scores more accurately (Loukina et al. 2020 ). The interpretation of kappa values is based on the work of Landis and Koch ( 1977 ): −1 indicates complete inconsistency, 0 indicates random agreement, 0.00–0.20 a slight level of agreement, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement. All statistical analyses were executed using Python scripts.
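The two PRMSE steps can be sketched as follows, where the baseline mean-squared error uses the mean human score as the prediction for every sample:

```python
def prmse(human, predicted):
    """Proportional reduction in mean squared error: 1 - MSE(tool) / MSE(human),
    where the human baseline predicts the mean human score for every sample."""
    n = len(human)
    mean_h = sum(human) / n
    # Step 1: MSE of the tool and MSE of the mean-score baseline.
    mse_tool = sum((y - p) ** 2 for y, p in zip(human, predicted)) / n
    mse_base = sum((y - mean_h) ** 2 for y in human) / n
    # Step 2: proportional reduction in error.
    return 1 - mse_tool / mse_base

print(prmse([1, 2, 3, 4], [1, 2, 3, 4]))  # perfect predictions -> 1.0
```

Predicting the mean for every essay yields a PRMSE of 0, since the tool then does no better than the baseline.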

Results and discussion

Annotation reliability of the LLM

This section focuses on assessing the reliability of the LLM’s annotation and scoring capabilities. To evaluate the reliability, several tests were conducted simultaneously, aiming to achieve the following objectives:

Assess the LLM’s ability to differentiate between test takers with varying levels of writing proficiency.

Determine the level of agreement between the annotations and scoring performed by the LLM and those done by human raters.

The evaluation of the results encompassed several metrics, including: precision, recall, F-Score, quadratically-weighted kappa, proportional reduction of mean squared error, Pearson correlation, and multi-faceted Rasch measurement.

Inter-annotator agreement (human–human annotator agreement)

We started with an agreement test between the two human annotators. Two trained annotators were recruited to annotate the measures in the writing task data. A total of 714 scripts were utilized as the test data. Each analysis lasted 300–360 min. Inter-annotator agreement was evaluated using the standard measures of precision, recall, F-score, and QWK. Table 7 presents the inter-annotator agreement for the various indicators. As shown, the inter-annotator agreement was fairly high, with F-scores ranging from 1.0 for sentence and word number to 0.666 for grammatical errors.

The findings from the QWK analysis provided further confirmation of the inter-annotator agreement. The QWK values covered a range from 0.950 ( p  = 0.000) for sentence and word number to 0.695 for synonym overlap number (keyword) and grammatical errors ( p  = 0.001).

Agreement of annotation outcomes between human and LLM

To evaluate the consistency between human annotators and LLM annotators (BERT, GPT, OCLL) across the indices, the same test was conducted. The results of the inter-annotator agreement (F-score) between LLM and human annotation are provided in Appendix B-D. The F-scores ranged from 0.706 for Grammatical error # for OCLL-human to a perfect 1.000 for GPT-human, for sentences, clauses, T-units, and words. These findings were further supported by the QWK analysis, which showed agreement levels ranging from 0.807 ( p  = 0.001) for metadiscourse markers for OCLL-human to 0.962 for words ( p  = 0.000) for GPT-human. The findings demonstrated that the LLM annotation achieved a significant level of accuracy in identifying measurement units and counts.

Reliability of LLM-driven AES’s scoring and discriminating proficiency levels

This section examines the reliability of the LLM-driven AES scoring through a comparison of the scoring outcomes produced by human raters and the LLM ( Reliability of LLM-driven AES scoring ). It also assesses the effectiveness of the LLM-based AES system in differentiating participants with varying proficiency levels ( Reliability of LLM-driven AES discriminating proficiency levels ).

Reliability of LLM-driven AES scoring

Table 8 summarizes the QWK coefficient analysis between the scores computed by the human raters and the GPT-4 for the individual essays from I-JAS Footnote 7 . As shown, the QWK of all measures ranged from k  = 0.819 for lexical density (number of lexical words (tokens)/number of words per essay) to k  = 0.644 for word2vec cosine similarity. Table 9 further presents the Pearson correlations between the 16 writing proficiency measures scored by human raters and GPT 4 for the individual essays. The correlations ranged from 0.672 for syntactic complexity to 0.734 for grammatical accuracy. The correlations between the writing proficiency scores assigned by human raters and the BERT-based AES system were found to range from 0.661 for syntactic complexity to 0.713 for grammatical accuracy. The correlations between the writing proficiency scores given by human raters and the OCLL-based AES system ranged from 0.654 for cohesion to 0.721 for grammatical accuracy. These findings indicated an alignment between the assessments made by human raters and both the BERT-based and OCLL-based AES systems in terms of various aspects of writing proficiency.

Reliability of LLM-driven AES discriminating proficiency levels

After validating the reliability of the LLM’s annotation and scoring, the subsequent objective was to evaluate its ability to distinguish between various proficiency levels. For this analysis, a dataset of 686 individual essays was utilized. Table 10 presents a sample of the results, summarizing the means, standard deviations, and the outcomes of the one-way ANOVAs based on the measures assessed by the GPT-4 model. A post hoc multiple comparison test, specifically the Bonferroni test, was conducted to identify any potential differences between pairs of levels.

As the results reveal, seven measures presented linear upward or downward progress across the three proficiency levels. These are marked in bold in Table 10 and comprise one measure of lexical richness, i.e. MATTR (lexical diversity); four measures of syntactic complexity, i.e. MDD (mean dependency distance), MLC (mean length of clause), CNT (complex nominals per T-unit), and CPC (coordinate phrases rate); one cohesion measure, i.e. word2vec cosine similarity; and GER (grammatical error rate). Regarding the ability of the sixteen measures to distinguish adjacent proficiency levels, the Bonferroni tests indicated that statistically significant differences exist between the primary level and the intermediate level for MLC and GER. One measure of lexical richness, namely LD, along with four measures of syntactic complexity (VPT, CT, DCT, ACC), two measures of cohesion (SOPT, SOPK), and one measure of content elaboration (IMM), exhibited statistically significant differences between proficiency levels. However, these differences did not demonstrate a linear progression between adjacent proficiency levels. No significant difference was observed in lexical sophistication between proficiency levels.

To summarize, our study aimed to evaluate the reliability and differentiation capabilities of the LLM-driven AES method. For the first objective, we assessed the LLM’s ability to differentiate between test takers with varying levels of writing proficiency using precision, recall, F-score, and quadratically weighted kappa. Regarding the second objective, we compared the scoring outcomes generated by human raters and the LLM to determine the level of agreement, employing quadratically weighted kappa and Pearson correlations across the 16 writing proficiency measures for the individual essays. The results confirmed the feasibility of using the LLM for annotation and scoring in AES for nonnative Japanese. Research Question 1 has thus been addressed.

Comparison of BERT-, GPT-, OCLL-based AES, and linguistic-feature-based computation methods

This section aims to compare the effectiveness of five AES methods for nonnative Japanese writing: LLM-driven approaches utilizing BERT, GPT, and OCLL, and linguistic-feature-based approaches using Jess and JWriter. The comparison was conducted by comparing the ratings obtained from each approach with human ratings. All ratings were derived from the dataset introduced in Section Dataset . To facilitate the comparison, the agreement between the automated methods and human ratings was assessed using QWK and PRMSE. The performance of each approach is summarized in Table 11 .

The QWK coefficient values indicate that LLMs (GPT, BERT, OCLL) and human rating outcomes demonstrated higher agreement compared to feature-based AES methods (Jess and JWriter) in assessing writing proficiency criteria, including lexical richness, syntactic complexity, content, and grammatical accuracy. Among the LLMs, the GPT-4 driven AES and human rating outcomes showed the highest agreement in all criteria, except for syntactic complexity. The PRMSE values suggest that the GPT-based method outperformed linguistic feature-based methods and other LLM-based approaches. Moreover, an interesting finding emerged during the study: the agreement coefficient between GPT-4 and human scoring was even higher than the agreement between different human raters themselves. This discovery highlights the advantage of GPT-based AES over human rating. Ratings involve a series of processes, including reading the learners’ writing, evaluating the content and language, and assigning scores. Within this chain of processes, various biases can be introduced, stemming from factors such as rater biases, test design, and rating scales. These biases can impact the consistency and objectivity of human ratings. GPT-based AES may benefit from its ability to apply consistent and objective evaluation criteria. By prompting the GPT model with detailed writing scoring rubrics and linguistic features, potential biases in human ratings can be mitigated. The model follows a predefined set of guidelines and does not possess the same subjective biases that human raters may exhibit. This standardization in the evaluation process contributes to the higher agreement observed between GPT-4 and human scoring. Section Prompt strategy of the study delves further into the role of prompts in the application of LLMs to AES. It explores how the choice and implementation of prompts can impact the performance and reliability of LLM-based AES methods. 
Furthermore, it is important to acknowledge the strengths of the local model, i.e. the Japanese local model OCLL, which excels in processing certain idiomatic expressions. Nevertheless, our analysis indicated that GPT-4 surpasses local models in AES. This superior performance can be attributed to the larger parameter size of GPT-4, estimated to be between 500 billion and 1 trillion, which exceeds the sizes of both BERT and the local model OCLL.

Prompt strategy

In the context of prompt strategy, Mizumoto and Eguchi ( 2023 ) conducted a study where they applied the GPT-3 model to automatically score English essays in the TOEFL test. They found that the accuracy of the GPT model alone was moderate to fair. However, when they incorporated linguistic measures such as cohesion, syntactic complexity, and lexical features alongside the GPT model, the accuracy significantly improved. This highlights the importance of prompt engineering and providing the model with specific instructions to enhance its performance. In this study, a similar approach was taken to optimize the performance of LLMs. GPT-4, which outperformed BERT and OCLL, was selected as the candidate model. Model 1 was used as the baseline, representing GPT-4 without any additional prompting. Model 2, on the other hand, involved GPT-4 prompted with 16 measures that included scoring criteria, efficient linguistic features for writing assessment, and detailed measurement units and calculation formulas. The remaining models (Models 3 to 18) utilized GPT-4 prompted with individual measures. The performance of these 18 different models was assessed using the output indicators described in Section Criteria (output indicator) . By comparing the performances of these models, the study aimed to understand the impact of prompt engineering on the accuracy and effectiveness of GPT-4 in AES tasks.

Based on the PRMSE scores presented in Fig. 4 , it was observed that Model 1, representing GPT-4 without any additional prompting, achieved a fair level of performance. However, Model 2, which utilized GPT-4 prompted with all measures, outperformed all other models in terms of PRMSE score, achieving a score of 0.681. These results indicate that the inclusion of specific measures and prompts significantly enhanced the performance of GPT-4 in AES. Among the measures, syntactic complexity was found to play a particularly significant role in improving the accuracy of GPT-4 in assessing writing quality. Following that, lexical diversity emerged as another important factor contributing to the model’s effectiveness. The study suggests that a well-prompted GPT-4 can serve as a valuable tool to support human assessors in evaluating writing quality. By utilizing GPT-4 as an automated scoring tool, the evaluation biases associated with human raters can be minimized. This has the potential to empower teachers by allowing them to focus on designing writing tasks and guiding writing strategies, while leveraging the capabilities of GPT-4 for efficient and reliable scoring.

Figure 4: PRMSE scores of the 18 AES models.

This study aimed to investigate two main research questions: the feasibility of utilizing LLMs for AES and the impact of prompt engineering on the application of LLMs in AES.

To address the first objective, the study compared the effectiveness of five models: GPT, BERT, a Japanese local LLM (OCLL), and two conventional machine-learning-based AES tools (Jess and JWriter). The PRMSE values indicated that the GPT-4-based method outperformed the other LLMs (BERT, OCLL) and the linguistic-feature-based computational methods (Jess and JWriter) across various writing proficiency criteria. Furthermore, the agreement coefficient between GPT-4 and human scoring surpassed the agreement among the human raters themselves, highlighting the potential of GPT-4 to enhance AES by reducing biases and subjectivity, saving time, labor, and cost, and providing valuable feedback for self-study. Regarding the second goal, the role of prompt design was investigated by comparing 18 models: a baseline model, a model prompted with all measures, and 16 models prompted with one measure at a time. GPT-4, which outperformed BERT and OCLL, was selected as the candidate model. The PRMSE scores showed that GPT-4 prompted with all measures achieved the best performance, surpassing the baseline and all other models.
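The agreement coefficient between two raters in this line of work is typically the quadratic weighted kappa (QWK). A minimal from-scratch sketch (for real use, `sklearn.metrics.cohen_kappa_score(a, b, weights='quadratic')` computes the same statistic):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two integer score vectors.

    Illustrative implementation; the rating bounds are passed explicitly
    so that score levels unused in the sample still count in the grid.
    """
    a = np.asarray(rater_a, dtype=int) - min_rating
    b = np.asarray(rater_b, dtype=int) - min_rating
    n = max_rating - min_rating + 1

    # Observed joint distribution of the two raters' scores
    observed = np.zeros((n, n))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= len(a)

    # Expected joint distribution if the two raters were independent
    expected = np.outer(np.bincount(a, minlength=n),
                        np.bincount(b, minlength=n)) / len(a) ** 2

    # Quadratic disagreement weights: larger penalty for larger gaps
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Perfect agreement yields 1.0; systematic disagreement yields negative values.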

In conclusion, this study has demonstrated the potential of LLMs in supporting human rating in assessments. By incorporating automation, we can save time and resources while reducing biases and subjectivity inherent in human rating processes. Automated language assessments offer the advantage of accessibility, providing equal opportunities and economic feasibility for individuals who lack access to traditional assessment centers or necessary resources. LLM-based language assessments provide valuable feedback and support to learners, aiding in the enhancement of their language proficiency and the achievement of their goals. This personalized feedback can cater to individual learner needs, facilitating a more tailored and effective language-learning experience.

There are three important areas that merit further exploration. First, prompt engineering requires attention to ensure optimal performance of LLM-based AES across different language types. This study revealed that GPT-4, when prompted with all measures, outperformed models prompted with fewer measures; investigating and refining prompt strategies can therefore enhance the effectiveness of LLMs in automated language assessments. Second, it is crucial to explore the application of LLMs in second-language assessment and learning for oral proficiency. Third, their potential in under-resourced languages deserves study. Recent advancements in self-supervised machine learning have significantly improved automatic speech recognition (ASR) systems, opening up new possibilities for creating reliable ASR systems, particularly for under-resourced languages with limited data. However, challenges persist in ASR. One is that automatic pronunciation evaluation assumes correct word pronunciation, which proves difficult for learners in the early stages of language acquisition, whose accents are shaped by their native languages; accurately segmenting short words becomes problematic in such cases. Another is that developing precise audio-text transcriptions for languages with non-native accented speech is a formidable task. Finally, assessing oral proficiency involves capturing various linguistic features, including fluency, pronunciation, accuracy, and complexity, which are not easily captured by current NLP technology.

Data availability

The dataset utilized was obtained from the International Corpus of Japanese as a Second Language (I-JAS), available at https://www2.ninjal.ac.jp/jll/lsaj/ihome2.html.

J-CAT and TTBJ are two computerized adaptive tests used to assess Japanese language proficiency.

SPOT is a specific component of the TTBJ test.

J-CAT: https://www.j-cat2.org/html/ja/pages/interpret.html

SPOT: https://ttbj.cegloc.tsukuba.ac.jp/p1.html#SPOT .

The study utilized a prompt-based GPT-4 model, developed by OpenAI, which has an impressive architecture with 1.8 trillion parameters across 120 layers. GPT-4 was trained on a vast dataset of 13 trillion tokens, using two stages: initial training on internet text datasets to predict the next token, and subsequent fine-tuning through reinforcement learning from human feedback.

https://www2.ninjal.ac.jp/jll/lsaj/ihome2-en.html .

http://jhlee.sakura.ne.jp/JEV/ by Japanese Learning Dictionary Support Group 2015.

We express our sincere gratitude to the reviewer for bringing this matter to our attention.

On February 7, 2023, Microsoft began rolling out a major overhaul to Bing that included a new chatbot feature based on OpenAI’s GPT-4 (Bing.com).

Appendices E and F present the analysis of the QWK coefficients between the scores computed by the human raters and those of the BERT and OCLL models.

Attali Y, Burstein J (2006) Automated essay scoring with e-rater® V.2. J. Technol., Learn. Assess., 4

Barkaoui K, Hadidi A (2020) Assessing Change in English Second Language Writing Performance (1st ed.). Routledge, New York. https://doi.org/10.4324/9781003092346

Bentz C, Ruzsics T, Koplenig A, Samardžić T (2016) A comparison between morphological complexity measures: Typological data vs. language corpora. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), 142–153. Osaka, Japan: The COLING 2016 Organizing Committee

Bond TG, Yan Z, Heene M (2021) Applying the Rasch model: Fundamental measurement in the human sciences (4th ed). Routledge

Brants T (2000) Inter-annotator agreement for a German newspaper corpus. Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00), Athens, Greece, 31 May-2 June, European Language Resources Association

Brown TB, Mann B, Ryder N, et al. (2020) Language models are few-shot learners. Advances in Neural Information Processing Systems, Online, 6–12 December, Curran Associates, Inc., Red Hook, NY

Burstein J (2003) The E-rater scoring engine: Automated essay scoring with natural language processing. In Shermis MD and Burstein JC (ed) Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Mahwah, NJ

Čech R, Kubát M (2018) Morphological richness of text. In Fidler M, Cvrček V (ed) Taming the corpus: From inflection and lexis to interpretation, 63–77. Cham, Switzerland: Springer Nature

Çöltekin Ç, Rama T (2018) Exploiting Universal Dependencies treebanks for measuring morphosyntactic complexity. In Berdicevskis A, Bentz C (ed) Proceedings of the First Workshop on Measuring Language Complexity, 1–7. Torun, Poland

Crossley SA, Cobb T, McNamara DS (2013) Comparing count-based and band-based indices of word frequency: Implications for active vocabulary research and pedagogical applications. System 41:965–981. https://doi.org/10.1016/j.system.2013.08.002


Crossley SA, McNamara DS (2016) Say more and be more coherent: How text elaboration and cohesion can increase writing quality. J. Writ. Res. 7:351–370

CyberAgent Inc (2023) Open-Calm series of Japanese language models. Retrieved from: https://www.cyberagent.co.jp/news/detail/id=28817

Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, Minneapolis, Minnesota, 2–7 June, pp. 4171–4186. Association for Computational Linguistics

Diez-Ortega M, Kyle K (2023) Measuring the development of lexical richness of L2 Spanish: a longitudinal learner corpus study. Studies in Second Language Acquisition 1-31

Eckes T (2009) On common ground? How raters perceive scoring criteria in oral proficiency testing. In Brown A, Hill K (ed) Language testing and evaluation 13: Tasks and criteria in performance assessment (pp. 43–73). Peter Lang Publishing

Elliot S (2003) IntelliMetric: from here to validity. In: Shermis MD, Burstein JC (ed) Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Mahwah, NJ


Engber CA (1995) The relationship of lexical proficiency to the quality of ESL compositions. J. Second Lang. Writ. 4:139–155

Garner J, Crossley SA, Kyle K (2019) N-gram measures and L2 writing proficiency. System 80:176–187. https://doi.org/10.1016/j.system.2018.12.001

Haberman SJ (2008) When can subscores have value? J. Educat. Behav. Stat., 33:204–229

Haberman SJ, Yao L, Sinharay S (2015) Prediction of true test scores from observed item scores and ancillary data. Brit. J. Math. Stat. Psychol. 68:363–385

Halliday MAK (1985) Spoken and Written Language. Deakin University Press, Melbourne, Australia

Hirao R, Arai M, Shimanaka H et al. (2020) Automated essay scoring system for nonnative Japanese learners. Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pp. 1250–1257. European Language Resources Association

Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput. 9(8):1735–1780

Hunt KW (1966) Recent measures in syntactic development. Elementary English 43(7):732–739. http://www.jstor.org/stable/41386067

Ishioka T (2001) About e-rater, a computer-based automatic scoring system for essays [Konpyūta ni yoru essei no jidō saiten shisutemu e-rater ni tsuite]. University Entrance Examination Forum [Daigaku nyūshi fōramu] 24:71–76

Ishioka T, Kameda M (2006) Automated Japanese essay scoring system based on articles written by experts. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia, 17–18 July 2006, pp. 233–240. Association for Computational Linguistics, USA

Japan Foundation (2021) Retrieved from: https://www.jpf.gp.jp/j/project/japanese/survey/result/dl/survey2021/all.pdf

Jarvis S (2013a) Defining and measuring lexical diversity. In Jarvis S, Daller M (ed) Vocabulary knowledge: Human ratings and automated measures (Vol. 47, pp. 13–44). John Benjamins. https://doi.org/10.1075/sibil.47.03ch1

Jarvis S (2013b) Capturing the diversity in lexical diversity. Lang. Learn. 63:87–106. https://doi.org/10.1111/j.1467-9922.2012.00739.x

Jiang J, Quyang J, Liu H (2019) Interlanguage: A perspective of quantitative linguistic typology. Lang. Sci. 74:85–97

Kim M, Crossley SA, Kyle K (2018) Lexical sophistication as a multidimensional phenomenon: Relations to second language lexical proficiency, development, and writing quality. Mod. Lang. J. 102(1):120–141. https://doi.org/10.1111/modl.12447

Kojima T, Gu S, Reid M et al. (2022) Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, New Orleans, LA, 29 November-1 December, Curran Associates, Inc., Red Hook, NY

Kyle K, Crossley SA (2015) Automatically assessing lexical sophistication: Indices, tools, findings, and application. TESOL Q 49:757–786

Kyle K, Crossley SA, Berger CM (2018) The tool for the automatic analysis of lexical sophistication (TAALES): Version 2.0. Behav. Res. Methods 50:1030–1046. https://doi.org/10.3758/s13428-017-0924-4


Kyle K, Crossley SA, Jarvis S (2021) Assessing the validity of lexical diversity using direct judgements. Lang. Assess. Q. 18:154–170. https://doi.org/10.1080/15434303.2020.1844205

Landauer TK, Laham D, Foltz PW (2003) Automated essay scoring and annotation of essays with the Intelligent Essay Assessor. In Shermis MD, Burstein JC (ed), Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Mahwah, NJ

Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 159–174

Laufer B, Nation P (1995) Vocabulary size and use: Lexical richness in L2 written production. Appl. Linguist. 16:307–322. https://doi.org/10.1093/applin/16.3.307

Lee J, Hasebe Y (2017) jWriter Learner Text Evaluator, URL: https://jreadability.net/jwriter/

Lee J, Kobayashi N, Sakai T, Sakota K (2015) A Comparison of SPOT and J-CAT Based on Test Analysis [Tesuto bunseki ni motozuku ‘SPOT’ to ‘J-CAT’ no hikaku]. Research on the Acquisition of Second Language Japanese [Dainigengo to shite no nihongo no shūtoku kenkyū] (18) 53–69

Li W, Yan J (2021) Probability distribution of dependency distance based on a Treebank of Japanese EFL Learners' Interlanguage. J. Quant. Linguist. 28(2):172–186. https://doi.org/10.1080/09296174.2020.1754611


Linacre JM (2002) Optimizing rating scale category effectiveness. J. Appl. Meas. 3(1):85–106


Linacre JM (1994) Constructing measurement with a Many-Facet Rasch Model. In Wilson M (ed) Objective measurement: Theory into practice, Volume 2 (pp. 129–144). Norwood, NJ: Ablex

Liu H (2008) Dependency distance as a metric of language comprehension difficulty. J. Cognitive Sci. 9:159–191

Liu H, Xu C, Liang J (2017) Dependency distance: A new perspective on syntactic patterns in natural languages. Phys. Life Rev. 21. https://doi.org/10.1016/j.plrev.2017.03.002

Loukina A, Madnani N, Cahill A, et al. (2020) Using PRMSE to evaluate automated scoring systems in the presence of label noise. Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, Seattle, WA, USA → Online, 10 July, pp. 18–29. Association for Computational Linguistics

Lu X (2010) Automatic analysis of syntactic complexity in second language writing. Int. J. Corpus Linguist. 15:474–496

Lu X (2012) The relationship of lexical richness to the quality of ESL learners’ oral narratives. Mod. Lang. J. 96:190–208

Lu X (2017) Automated measurement of syntactic complexity in corpus-based L2 writing research and implications for writing assessment. Lang. Test. 34:493–511

Lu X, Hu R (2022) Sense-aware lexical sophistication indices and their relationship to second language writing quality. Behav. Res. Method. 54:1444–1460. https://doi.org/10.3758/s13428-021-01675-6

Ministry of Health, Labor, and Welfare of Japan (2022) Retrieved from: https://www.mhlw.go.jp/stf/newpage_30367.html

Mizumoto A, Eguchi M (2023) Exploring the potential of using an AI language model for automated essay scoring. Res. Methods Appl. Linguist. 3:100050

Okgetheng B, Takeuchi K (2024) Estimating Japanese Essay Grading Scores with Large Language Models. Proceedings of 30th Annual Conference of the Language Processing Society in Japan, March 2024

Ortega L (2015) Second language learning explained? SLA across 10 contemporary theories. In VanPatten B, Williams J (ed) Theories in Second Language Acquisition: An Introduction

Rae JW, Borgeaud S, Cai T, et al. (2021) Scaling Language Models: Methods, Analysis & Insights from Training Gopher. ArXiv, abs/2112.11446

Read J (2000) Assessing vocabulary. Cambridge University Press. https://doi.org/10.1017/CBO9780511732942

Rudner LM, Liang T (2002) Automated Essay Scoring Using Bayes’ Theorem. J. Technol., Learning and Assessment, 1 (2)

Sakoda K, Hosoi Y (2020) Accuracy and complexity of Japanese Language usage by SLA learners in different learning environments based on the analysis of I-JAS, a learners’ corpus of Japanese as L2. Math. Linguist. 32(7):403–418. https://doi.org/10.24701/mathling.32.7_403

Suzuki N (1999) Summary of survey results regarding comprehensive essay questions. Final report of “Joint Research on Comprehensive Examinations for the Aim of Evaluating Applicability to Each Specialized Field of Universities” for 1996-2000 [shōronbun sōgō mondai ni kansuru chōsa kekka no gaiyō. Heisei 8 - Heisei 12-nendo daigaku no kaku senmon bun’ya e no tekisei no hyōka o mokuteki to suru sōgō shiken no arikata ni kansuru kyōdō kenkyū’ saishū hōkoku-sho]. University Entrance Examination Section Center Research and Development Department [Daigaku nyūshi sentā kenkyū kaihatsubu], 21–32

Taghipour K, Ng HT (2016) A neural approach to automated essay scoring. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, 1–5 November, pp. 1882–1891. Association for Computational Linguistics

Takeuchi K, Ohno M, Motojin K, Taguchi M, Inada Y, Iizuka M, Abo T, Ueda H (2021) Development of essay scoring methods based on reference texts with construction of research-available Japanese essay data. In IPSJ J 62(9):1586–1604

Ure J (1971) Lexical density: A computational technique and some findings. In Coultard M (ed) Talking about Text. English Language Research, University of Birmingham, Birmingham, England

Vaswani A, Shazeer N, Parmar N, et al. (2017) Attention is all you need. In Advances in Neural Information Processing Systems, Long Beach, CA, 4–7 December, pp. 5998–6008, Curran Associates, Inc., Red Hook, NY

Watanabe H, Taira Y, Inoue Y (1988) Analysis of essay evaluation data [Shōronbun hyōka dēta no kaiseki]. Bulletin of the Faculty of Education, University of Tokyo [Tōkyōdaigaku kyōiku gakubu kiyō], Vol. 28, 143–164

Yao S, Yu D, Zhao J, et al. (2023) Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36

Zenker F, Kyle K (2021) Investigating minimum text lengths for lexical diversity indices. Assess. Writ. 47:100505. https://doi.org/10.1016/j.asw.2020.100505

Zhang Y, Warstadt A, Li X, et al. (2021) When do you need billions of words of pretraining data? Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, pp. 1112-1125. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-long.90


This research was funded by National Foundation of Social Sciences (22BYY186) to Wenchao Li.

Author information

Authors and affiliations.

Wenchao Li: Department of Japanese Studies, Zhejiang University, Hangzhou, China

Haitao Liu: Department of Linguistics and Applied Linguistics, Zhejiang University, Hangzhou, China


Contributions

Wenchao Li is in charge of conceptualization, validation, formal analysis, investigation, data curation, visualization and writing the draft. Haitao Liu is in charge of supervision.

Corresponding author

Correspondence to Wenchao Li .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

Ethical approval was not required as the study did not involve human participants.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material file #1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Li, W., Liu, H. Applying large language models for automated essay scoring for non-native Japanese. Humanit Soc Sci Commun 11 , 723 (2024). https://doi.org/10.1057/s41599-024-03209-9

Download citation

Received : 02 February 2024

Accepted : 16 May 2024

Published : 03 June 2024

DOI : https://doi.org/10.1057/s41599-024-03209-9




How do credit scores work? 2 finance professors explain how lenders choose who gets loans and at what interest rate


D. Brian Blank, Assistant Professor of Finance, Mississippi State University


Tom Miller Jr., Professor of Finance, Mississippi State University

Disclosure statement

Tom Miller Jr. is affiliated with Consumers' Research, a consumer advocacy organization founded in 1929.

D. Brian Blank does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Mississippi State University provides funding as a member of The Conversation US.


With the cost of borrowing money to buy a home or a car inching ever higher, understanding who gets access to credit, and at what interest rate, is more important for borrowers’ financial health than ever. Lenders base those decisions on the borrowers’ credit scores.

To learn more about credit scores, The Conversation consulted with two finance scholars. Brian Blank is an assistant professor of finance at Mississippi State University with expertise related to how firms allocate capital , as well as the role of credit in mortgage lending . His colleague at Mississippi State, Tom Miller Jr. , is a finance professor who has written a book on consumer lending , in addition to providing his expertise to policymakers.

Credit scoring assesses the likelihood of default

Lenders stay in business when borrowers pay back loans.

Some borrowers consistently make prompt payments, while others are slow to repay, and still others default – meaning they do not pay back the money they borrowed. Lenders have a strong business incentive to separate loans that will be paid back from loans that might be paid back.

So how do lenders distinguish between good borrowers and risky ones? They rely on various proprietary credit scoring systems that use past borrower repayment history and other factors to predict the likelihood of future repayment. The three organizations that monitor credit scores in the U.S. are Transunion , Experian and Equifax .

Although 26 million of 258 million credit-eligible Americans lack a credit score , anyone who has ever opened a credit card or other credit account, like a loan, has one. Most people don’t have a credit score before turning 18 , which is usually the age applicants can begin opening credit cards in their own name. However, some people still have no credit later in life if they don’t have any accounts for reporting agencies to assess.

Credit scores simply summarize how well individuals repay debt over time. Based on that repayment behavior, the credit scoring system assigns each person a single number ranging from 300 to 850 . A credit score from 670 to 739 is generally considered good, a score from 580 to 669 would be judged fair, and a score of 579 or below is classified poor, or subprime.
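Those bands can be expressed as a simple lookup. A minimal sketch, using the cut-offs quoted in this article (including the "very good" 740-799 and "excellent" 800-850 bands it also mentions); individual lenders and scoring models may draw the lines differently:

```python
def credit_band(score: int) -> str:
    """Map a credit score (300-850) to the bands described in the article.

    Cut-offs follow the ranges quoted here; real lenders and scoring
    models may categorize slightly differently.
    """
    if not 300 <= score <= 850:
        raise ValueError("credit scores range from 300 to 850")
    if score >= 800:
        return "excellent"
    if score >= 740:
        return "very good"
    if score >= 670:
        return "good"
    if score >= 580:
        return "fair"
    return "poor (subprime)"
```

For example, `credit_band(700)` returns `"good"` and `credit_band(560)` returns `"poor (subprime)"`.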

The two most important factors in credit scores are how promptly past debts have been paid and the amount the individual owes on current debt. The score also takes into account the mix and length of credit, in addition to how new it is.

Credit scores can help lenders decide what interest rate to offer consumers. And they can affect banks’ decisions concerning access to mortgages, credit cards and auto loans.

A smiling woman looks at her computer while holding a credit card in her right hand.

Recent improvements in consumer credit scores

Average credit scores in the United States rose from 688 in 2005 to 716 as of August 2021 . They stayed steady at that level through 2022 .

While credit card debt is at a record high , the average consumer was using just over a fourth of the revolving credit to which they had access as of September 2022.

As of 2021, nearly half of U.S. consumers had scores considered very good – meaning in the range of 740 to 799 – or excellent (800-850). Six in 10 Americans have a score above 700 , consistent with the general trend of record-setting credit scores of the past few years. These trends might, in part, reflect new programs that are designed to note when individuals pay bills like rent and utilities on time, which can help boost scores .

During the first quarter of 2023, people taking out new mortgages had an average credit score of 765, which is one point lower than a year ago but still higher than the pre-pandemic average of 760.

Credit score evolution from the 1980s to the 2020s

Developed in the late 1950s, the first credit scores – FICO scores – were created to build a computerized, objective measure to help lenders make lending decisions. Before then, bankers relied on commercial credit reporting, the same system merchants used to evaluate the creditworthiness of potential customers based on relationships and subjective evaluation .

The FICO credit scoring system was enhanced over the 1960s and '70s, and lenders grew to trust computerized credit evaluation systems. Credit scores really began to exert an influence on American borrowers in the 1980s as FICO became widely used .

A major goal of the credit score is to expand the pool of potential borrowers while minimizing the overall default rate of the pool. In this way, lenders can maximize the number of loans they make. Still, credit scores are imperfect predictors, likely because most credit models assume that consumers will continue to act in the same way in the future as they have in the past. In addition, some believe that various risk factors make credit scores imperfect . Credit modelers, however, continue to make progress by making continuous technological innovations . Even FinTech lenders, which strive to go beyond traditional credit models , heavily rely on credit scores to set their interest rates.

Recently, “Buy Now, Pay Later” accounts have been added to credit scoring, while medical debt has been removed .

Credit scores might seem scary but can be useful

Borrowers with poor or limited credit have challenges building more positive credit histories and good credit scores. This challenge is particularly important because credit scores have become more widely used than ever because of the increasing availability of data and growing precision of credit models.

The availability of additional data results in more precise estimates of credit scoring , which can improve access to credit for consumers who repay bills consistently over time. These so-called “boost programs” factor in other payments that consumers routinely make on a monthly schedule. Think of the number of bills that you auto pay. Boost programs add points to your credit score for the bills that you pay consistently.

You can improve your credit score by making wise decisions

Two of the most important ways to improve credit scores are paying bills on time and ensuring that your credit report accurately reflects your payment history. Simply avoiding default is not enough. Timely payments are necessary. Someone who pays the bills every three months is “caught up” every quarter. But that consumer is 90 days delinquent four times a year. Being 90 days delinquent alarms creditors. So, someone who pays the bills every month will have a higher credit score at the end of the year.

Having more credit accounts can also positively affect your credit score, because these accounts show that many lenders find you creditworthy. As a result, you might benefit from leaving credit accounts open if you make the wise decision not to access that credit. A warning, though: do not use that extra credit access to spend more money and accumulate more debt. That decision is unwise.

Why? Because managing the ratio of debt to income is also critical to a good credit score . Debt-to-income ratios of 36% or less generally indicate individuals who have income to put toward savings, which is what all lenders are looking to see and one of the best ways to improve your credit.
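As a rough sketch of that ratio, assuming the common definition of monthly debt payments over gross monthly income (the function name and inputs are illustrative; lenders differ on exactly which obligations count as debt payments):

```python
def debt_to_income(monthly_debt_payments: float, gross_monthly_income: float) -> float:
    """Debt-to-income ratio as a percentage.

    Sketch only: lenders vary in which obligations they count as
    'debt payments' (e.g., whether insurance or utilities are included).
    """
    if gross_monthly_income <= 0:
        raise ValueError("income must be positive")
    return 100.0 * monthly_debt_payments / gross_monthly_income

# Example: $1,800 in monthly debt payments on $6,000 gross monthly income
# gives a 30% DTI, under the 36% threshold mentioned above.
```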



PrepScholar SAT / ACT Prep Online Guides and Tips

What Is a Good ACT Writing Score?

As you study for the ACT, it's easy enough to calculate your ACT composite target score. But where does your essay score fit into all this? What's a good ACT Writing score? Read on to find out how to figure it out!


How Do You Figure Out What's a Good ACT Writing Score?

A good essay score depends on what your goals are. These goals should be concrete and determined by the colleges you're applying to . Find out more about why this is the only factor that truly matters in our article on what a good, bad, and excellent ACT score is .

So how do you figure out your target ACT Writing Score?

Step 0: Is ACT Writing Required?

Especially now that the SAT essay is no longer mandatory, many schools have been reevaluating their stance on whether or not to require the ACT essay (since schools generally like to have a consistent standard across the two tests).

Some colleges are ACT Writing-optional, while others don't consider it at all. Use our complete list of which schools require ACT Writing to figure out where the schools you're applying to stand on the issue.

Step 1: Use Our Worksheet

We've created a handy worksheet to help you figure out your target ACT score . For now we'll be adapting the worksheet to figure out what a good ACT Writing score is for you, although I definitely recommend also filling out a second sheet to figure out your target ACT composite score .

Step 2: Fill In Your Schools

On the worksheet, fill in the names of the schools you want to get into in the leftmost column. Include dream or "reach" schools, but don't include "safety schools" (schools you think you have at least a 90% chance of getting into).

Step 3: Get ACT Writing Score Data

You can try to get the data from each school's admissions website, but this is time-consuming and not always successful, as admission sites aren't laid out in a particularly standardized way.

The best source of data for ACT Writing scores is a school's Common Data Set , if the school chooses to publish it. The Common Data Set, or CDS, is a set of survey items that schools can choose to fill out and put online to give students information about the school in a standardized way. The CDS's section about First-Time, First-Year (Freshman) Admission may include information about students' 25th and 75th percentile scores on ACT Writing.

Filling out the CDS isn't mandatory, and even schools that do aren't required to include the ACT Writing information, so you won't always find what you're looking for. Still, the CDS is the most up-to-date and reliable source for ACT Writing score information.

A third option is to take to Google and search out other sources for this data; however, you should differentiate between this kind of unofficial information and official data released by the schools.

Step 4: Average Both Columns

Total up the 25th and 75th percentile scores, then find the average of each column. We recommend that you use the 75th percentile average as your target ACT Writing score, since hitting that score will give you a very strong chance of getting into the schools you've listed. If you're applying to humanities programs, you may even want to set a higher target for ACT Writing, as the score may be used for placement in certain courses.

A quick refresher on what percentile scores mean: the 25th percentile score means that 25 percent of attending students scored at or below that number (the low end of the middle range), while the 75th percentile score means that 75 percent of attending students scored at or below that number (the high end).

For example, let's say that the 25th/75th percentile ACT Writing score range for Northwestern University is 7/10. If you score above the 75th percentile score (a 10), your Writing score will help your chances of admission; if you score below the 25th percentile score (a 6 or lower), your Writing score might harm your chances of admission.
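To make the Step 4 arithmetic concrete, here's a minimal sketch of the averaging calculation. The school names and score ranges below are hypothetical examples, not real admissions data:

```python
# Step 4 sketch: average each percentile column across your schools.
# Each entry is (25th percentile, 75th percentile) for ACT Writing.
schools = {
    "School A": (7, 10),
    "School B": (8, 10),
    "School C": (8, 11),
}

p25_avg = sum(p25 for p25, _ in schools.values()) / len(schools)
p75_avg = sum(p75 for _, p75 in schools.values()) / len(schools)

# Use the 75th percentile average, rounded, as your target Writing score.
target = round(p75_avg)
print(f"25th avg: {p25_avg:.1f}, 75th avg: {p75_avg:.1f}, target: {target}")
```

With these placeholder numbers, the 75th percentile column averages to about 10.3, so you'd aim for a 10 (or, to be safe, an 11) on ACT Writing.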

What If There's No ACT Writing Score Data?

Unfortunately, very few colleges actually release their ACT Writing score ranges. The information is rarely easy to find on school websites (since admissions sites have no standardized format); instead, you have to search for a school's most recent Common Data Set or rely on data provided by third parties.

If there is no data for ACT Writing scores at all, you can take a look at the school's composite ACT score ranges to get a rough idea of where your ACT Writing score should be.

Because it requires exceptional skill to get a 6 in all four domains (a 12/12) on ACT Writing, even the most competitive schools will accept a 9/12 on the essay (which puts you in the 96th percentile for ACT Writing), even when the school's ACT composite range is 32-35.

We've created a chart below that compares percentiles for ACT composite scores and ACT Writing scores. You can use this chart to help you figure out roughly where your Writing score should fall, based on your composite score.

[Chart: percentiles for ACT composite scores vs. ACT Writing scores. Source: two different ACT.org pages]

As an example, Northwestern's 25th/75th range for ACT composite scores is 33-35, so you should aim for an overall ACT Writing score of between 10 and 11 out of 12.

In general, as long as your Writing score percentile is in the general ballpark (within 20-30 percentile points) of your composite score percentile, you'll be fine.
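As a rough sketch of that ballpark check, you could compare the two percentiles directly. The percentile mappings below are illustrative placeholders, not official ACT tables:

```python
# Ballpark check: is your Writing score percentile within ~30 points
# of your composite score percentile? The percentile lookups below are
# made-up placeholders; use the real chart (or ACT.org data) instead.
composite_percentiles = {30: 93, 32: 97, 34: 99}
writing_percentiles = {8: 90, 9: 96, 10: 99}

def in_ballpark(composite: int, writing: int, tolerance: int = 30) -> bool:
    """Return True if the two scores' percentiles are within `tolerance` points."""
    gap = abs(composite_percentiles[composite] - writing_percentiles[writing])
    return gap <= tolerance

print(in_ballpark(34, 8))  # a 34 composite with an 8 essay is within range
```

In other words, even a Writing score a couple of points below your composite's level usually stays within the 20-30 percentile-point window.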

Summary: How to Decide What a Good ACT Writing Score Is

  • First, look up whether the schools you want to apply to require ACT Writing scores.
  • Check each school's admissions site or Common Data Set (if published) for ACT Writing score data, OR estimate the Writing score range from the school's ACT composite score range.
  • Then average the 25th and 75th percentile Writing scores across your schools and choose a target ACT Writing score (we recommend the 75th percentile average).


What's Next?

Now that you've dipped your toe into the waters of ACT Writing scoring, are you ready for more? Of course you are. Dive into the depths of ACT Writing with this full analysis of the ACT essay grading rubric.

Is a longer essay always better? Find out how essay length affects your ACT Writing score here.

Completely confused about how the ACT Writing test is scored? You're not alone. Dispel your confusion with our complete guide to ACT essay scoring.

Curious about where your ACT Writing score stands in comparison to everyone else's? Find out what an average ACT Writing score is in this article.

Want to improve your ACT score by 4 points? Check out our best-in-class online ACT prep classes. We guarantee your money back if you don't improve your ACT score by 4 points or more.

Our classes are entirely online, and they're taught by ACT experts. If you liked this article, you'll love our classes. Along with expert-led classes, you'll get personalized homework with thousands of practice problems organized by individual skills so you learn most effectively. We'll also give you a step-by-step, custom program to follow so you'll never be confused about what to study next.

Try it risk-free today:

Laura graduated magna cum laude from Wellesley College with a BA in Music and Psychology, and earned a Master's degree in Composition from the Longy School of Music of Bard College. She scored in the 99th percentile on both the SAT and GRE and loves advising students on how to excel in high school.
