Sample MEXT Field of Study Statement 2027

A real MEXT field-of-study + research-plan statement, line-by-line annotated across 12 sections to show exactly what reviewers look for and what to cut.

Published: April 30, 2026

The Field of Study and Research Plan is the single document that decides whether your MEXT application reaches the interview. Two pages, plain text, no charts. Reviewers read fifty of these per cycle and they recognize a strong one in the first thirty seconds. If yours reads like a generic personal statement, you are out before anyone reaches the method section. This guide shows you exactly what reviewers look for, how to structure the two pages, what kills statements instantly, and a full annotated sample that you can use as a structural reference for your own. The sample is fictional — written for illustration, not as a real research proposal — but the structure, level of specificity, and tone match what gets through.

Why this two-page document decides everything

Among the nine documents in a MEXT 2027 application, the Field of Study and Research Plan is the one document where you exercise direct creative control. Your transcripts are fixed. Your recommendation letters are written by other people. The application form is mostly tickboxes. The medical certificate is filled in by a doctor. The research plan is the only place where reviewers see how you think — and that is exactly what they are checking. Embassy panel members and university selection committees use this document as the primary signal of academic readiness, because everything else can be inflated.

What this means in practice is that two equally credentialed applicants — same GPA, same English score, similar institution — get separated almost entirely on the strength of their research plan. A clean, specific, well-targeted plan moves you to the interview. A vague, buzzword-heavy plan with no named professor sends you to the rejection pile, regardless of how strong the rest of your file looks. For the broader application structure, see the MEXT 2027 complete guide; this page is the deep dive on the one document inside it that matters most.

The two-page format and structure

MEXT supplies a fixed two-page template inside the 2027 application packet. The template gives you a single text area with a small box at the top for your name and chosen field. It does not give you section headings, a bibliography page, or a budget section. You have to design your own structure inside the two pages, and the structure that wins is almost always the same:

  1. The problem (about half a page). Lead with the open research question. State why it matters in two sentences. Cite two or three recent papers that frame the gap.
  2. Your method (about one full page). What you would actually do during the kenkyusei period and the Master's. Datasets, methods, baselines, evaluation. This is the largest section and the one reviewers spend the most time on.
  3. Why Japan, why this professor, why this lab (about half a page). Tie your method to the named professor's recent published work. Two papers minimum. A concrete proposal for what you would build on.

Inside that structure you can use bold or italics sparingly for emphasis, but you should not use section headings — the document reads better as continuous prose with paragraph breaks. Reviewers read it linearly, and headings break the flow without adding signal. The result reads more like a research grant abstract than a cover letter.

The five things reviewers look for

MEXT reviewer training documents — leaked summaries from former panel members, plus the published criteria from several embassy and university selection guides — converge on five concrete signals reviewers check when reading the research plan:

  1. A specific research problem. Not "AI in healthcare". Something narrower like "self-supervised pretraining for chest X-ray classification when labeled rare-disease data is scarce". Reviewers read the first paragraph specifically to see whether you can articulate a problem at the right scope.
  2. A named professor. A specific professor at a specific Japanese university. The full name. The lab name. Generic "I would like to study at a leading Japanese university" applications are eliminated at the first pass.
  3. At least one named recent paper from that professor. Title, year, and a one-sentence summary of what the paper showed. This is the signal that you have actually read the lab's work, not just looked at the lab's homepage. It also lets the reviewer (often an academic who knows the field) verify that you are reading the right paper.
  4. A feasible method. The method section needs enough technical detail that another researcher could see how you would start. You do not need to have already run experiments — but you do need to name datasets, methods, baselines, and an evaluation metric.
  5. Genuine fit between your method and the lab's work. The why-this-lab paragraph needs to make the case that your proposed method extends or complements recent work from the named professor. Reviewers can tell the difference between a candidate who is bending their interests to fit a lab they already chose, and one who independently picked the same direction.

For the longer interview-prep version of these signals, see what Japanese professors look for in international applicants. The same signals show up in both the field-of-study statement and the interview, because both are evaluated by overlapping panels.

What kills statements instantly

The pattern of failed statements is consistent across embassies and across years. Avoid every one of these:

  • Generic phrasing. "Japan is a leading country in technology", "the rich academic tradition of Japanese universities", "I have always been fascinated by Japanese culture". These sentences do nothing for the reviewer except eat space and signal that you are recycling boilerplate.
  • No specific Japanese paper cited. If your statement names only Western or non-Japanese papers, reviewers conclude that you are applying for funding rather than choosing a lab. Cite at least one recent paper from a Japanese group, and ideally one specifically from your target professor.
  • No professor named. The single fastest filter. A statement with no named professor is treated as a mass-produced application and rejected.
  • US-style admission essay format. Childhood anecdote opener, personal-journey arc, "I want to make the world a better place" closer. This format is for US private graduate-school personal statements. MEXT reviewers explicitly deprioritize it because it eats space that should go to the research problem.
  • Method section that is just a list of buzzwords. "I will use deep learning, transformer architectures, and reinforcement learning to advance the state of the art in medical AI" — three buzzwords and no concrete plan. Reviewers read this as not knowing the field.
  • Overpromising novelty. "My research will revolutionize the field" and "no one has ever attempted this approach before" are both red flags. Experienced reviewers know that any approach a Master's student can describe in a paragraph has almost certainly been attempted before. Claim a specific contribution, not a revolution.
  • Pages of citations. The two pages do not include a bibliography. Cite inline in parentheses with author and year. Three to six citations is plenty; ten is overkill.

Full annotated sample statement

The sample below is a complete two-page MEXT field-of-study statement, roughly seven hundred words, in the structure that wins. The topic is self-supervised pretraining for medical image classification, with reference to a fictional Professor Tanaka at the University of Tokyo. The professor and lab are not real — this is a structural model, not a real proposal — but the technical content is grounded enough to be plausible. Read the sample, then read the per-paragraph annotation that follows. Adapt the structure to your own topic; do not copy the wording. Selection committees read across applications and a copy-pasted sample is identified within minutes.

Field of Study and Research Plan

Applicant: [Your Full Name]
Field: Computer Science (Medical Image Analysis)

Research title: Self-supervised pretraining for label-efficient medical image
classification, with application to rare-disease chest radiography.

Background and problem. Deep convolutional and transformer-based models have driven
substantial gains in medical image classification over the past five years, but their
clinical applicability remains constrained by the labeled-data bottleneck. State-of-the-art
chest X-ray classifiers (e.g., Rajpurkar et al. 2017, Tiu et al. 2022) reach radiologist-
level performance on common findings precisely because large labeled corpora such as
CheXpert and MIMIC-CXR are available. For rare findings — pulmonary alveolar proteinosis,
hereditary hemorrhagic telangiectasia, certain pediatric interstitial diseases — labeled
data per institution is often in the tens, not the tens of thousands. The result is a
two-tier diagnostic landscape: AI tools that perform well on common conditions and not
at all on the long tail. The open research question I want to address in graduate school
is whether self-supervised pretraining on the very large pool of unlabeled chest
radiographs can transfer effectively to rare-disease classification, closing the gap that
the labeled-data bottleneck has created.

Proposed method. My research plan during the kenkyusei period and the Master's program
proceeds in three stages. In the first six months I will reproduce two recent
self-supervised pretraining baselines — DINOv2-style self-distillation and SimCLR-style
contrastive learning — on the publicly available MIMIC-CXR-JPG corpus (approximately
377,000 chest radiographs from 65,000 patients, fully unlabeled for the pretraining stage).
Reproduction matters here because reported transfer performance on medical imaging in the
self-supervised literature is unstable, and a clean baseline is the prerequisite for any
new contribution. In the second stage, roughly months six to fifteen, I will fine-tune
the pretrained backbones on a curated rare-disease subset compiled from the open NIH
ChestX-ray14 dataset and the MIMIC-CXR labeled split, comparing label-efficiency curves
between self-supervised and fully supervised initialization at training-set sizes from
ten to one thousand labeled examples per class. The primary evaluation metric is AUROC
under the rare-class macro average, with bootstrap confidence intervals. In the final
stage, months fifteen onward, I will investigate domain-specific pretraining objectives
that exploit the spatial structure of chest radiography — in particular, an objective
that aligns representations across paired frontal and lateral views. The hypothesis is
that view-aligned pretraining captures anatomical invariances that generic contrastive
objectives miss, and that this transfer advantage is largest precisely in the small-
labeled-data regime where rare diseases live.

Why Japan and Professor Tanaka's lab. Professor Tanaka's group at the University of Tokyo
publishes the most directly relevant recent work on self-supervised representation
learning for medical imaging. The 2025 paper from his group on contrastive pretraining
with anatomical priors (Tanaka et al., MICCAI 2025) demonstrated that injecting weak
spatial supervision into self-supervised objectives improves downstream sample
efficiency by roughly thirty percent on three public chest X-ray benchmarks. That paper
is the methodological foundation I would build on. A second paper from the same group
on cross-view consistency in radiograph pretraining (Sato and Tanaka, IEEE TMI 2026) is
the closest existing work to the view-aligned objective I propose in stage three of my
plan. No comparable line of work exists at the labs I have considered in my home country
or in the United States. I have written to Professor Tanaka separately and the lab has
indicated that it will accept new graduate students for the April 2027 cycle through the
University Recommendation track. I am also drawn to the Japanese medical-imaging research
ecosystem more broadly: the active collaboration between AMED-funded clinical centers and
machine-learning groups makes Japan one of the few places where a Master's student can
realistically work on real-world rare-disease imaging data alongside methodological
research. My JLPT N3 level is sufficient for daily lab life, and I plan to reach N2 by
the end of the kenkyusei period.
    

Note on the sample. Professor Tanaka, the cited Tanaka et al. 2025 paper, the Sato and Tanaka 2026 paper, and the specific lab affiliation in the sample above are fictional, used for illustration. When you write your own statement, replace these with a real professor, a real recent paper from that professor's group, and a real lab name — and verify the citations exist before submitting. Reviewers occasionally check.

Per-paragraph annotation

Header. Three lines: applicant name, broad field, and a specific research title. The research title alone tells the reviewer the topic, the method family, and the application. A title like "AI for healthcare" would already lose points; "Self-supervised pretraining for label-efficient medical image classification, with application to rare-disease chest radiography" is at exactly the right scope — narrow enough to be specific, broad enough to support two years of research.

Background and problem (about 200 words, half a page). Opens by naming the field and the bottleneck in two sentences. Cites two real benchmark papers (Rajpurkar 2017, Tiu 2022) to anchor the state of the art. Names three concrete rare diseases instead of saying "rare diseases generally" — this is the specificity signal reviewers look for. Closes by stating the open research question explicitly. The whole paragraph reads like the introduction of a research grant abstract, not like a personal statement. This is the right tone.

Proposed method (about 320 words, one full page). The decisive section. Three concrete stages with named timeframes. Each stage names a method family (DINOv2, SimCLR, view-aligned pretraining), at least one specific dataset (MIMIC-CXR-JPG, NIH ChestX-ray14), an evaluation framework (AUROC under rare-class macro average with bootstrap confidence intervals), and a clearly stated hypothesis at the end. A reviewer reading this paragraph can immediately see that the applicant has thought through what the first six months in the lab would actually look like. There is no "I will explore the possibilities of" hedge — the plan commits to specific experiments. That commitment is what separates a strong method section from a weak one.
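If you cite an evaluation framework in your own statement, you should know exactly how it is computed, because interviewers sometimes ask. The sketch below illustrates the metric the sample commits to — AUROC under the rare-class macro average, with a bootstrap confidence interval. It is for your own understanding, not for the statement itself; the function names are ours, and a real experiment would use a library implementation such as scikit-learn's `roc_auc_score` rather than hand-rolling it.

```python
import random

def auroc(scores, labels):
    # Rank (Mann-Whitney) formulation of AUROC: the probability that a
    # randomly chosen positive case scores above a randomly chosen
    # negative case, counting ties as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auroc(scores_by_class, labels_by_class):
    # Macro average: per-class AUROC, then an unweighted mean, so a
    # rare class counts exactly as much as a common one.
    vals = [auroc(s, y) for s, y in zip(scores_by_class, labels_by_class)]
    return sum(vals) / len(vals)

def bootstrap_ci(scores_by_class, labels_by_class, n_boot=1000, seed=0):
    # 95% bootstrap CI: resample cases with replacement, recompute the
    # metric, and take the 2.5th / 97.5th percentiles of the resamples.
    rng = random.Random(seed)
    n = len(labels_by_class[0])
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        val = macro_auroc(
            [[s[i] for i in idx] for s in scores_by_class],
            [[y[i] for i in idx] for y in labels_by_class],
        )
        if val == val:  # skip resamples that lost a class entirely (NaN)
            stats.append(val)
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats)) - 1]
```

The macro average is the design choice that matters for the rare-disease framing: a plain (micro) average would be dominated by the common classes, hiding exactly the long-tail failure the sample's problem statement describes.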

Why Japan and Professor Tanaka's lab (about 230 words, half a page). Names the professor, names two specific papers from the lab with venue and year (MICCAI 2025, IEEE TMI 2026), and explicitly ties one paper to the methodological foundation of the plan and the other to the proposed extension. Notes that the applicant has already contacted the professor and received a positive signal — this is a strong move when true, because it tells the reviewer the lab is realistic. Adds a short note on the broader Japanese research ecosystem (AMED, clinical-ML collaboration) without falling into generic Japan-praise. Closes with a single sentence on Japanese language ability and JLPT plan. For the email that produced the lab's positive signal, see how to email a Japanese professor.

The three critical sentences

Inside any winning two-page statement there are three sentences that carry disproportionate weight. Reviewers find them in the first read and decide most of the verdict from them. Make sure your statement has all three:

  1. The problem statement. One sentence that names the open research question at the right scope. In the sample: "The open research question I want to address in graduate school is whether self-supervised pretraining on the very large pool of unlabeled chest radiographs can transfer effectively to rare-disease classification, closing the gap that the labeled-data bottleneck has created."
  2. The method declaration. One sentence that commits to a specific methodological approach and dataset. In the sample: "In the first six months I will reproduce two recent self-supervised pretraining baselines — DINOv2-style self-distillation and SimCLR-style contrastive learning — on the publicly available MIMIC-CXR-JPG corpus."
  3. The lab fit statement. One sentence that names the professor, names a specific recent paper, and links it to your method. In the sample: "The 2025 paper from his group on contrastive pretraining with anatomical priors (Tanaka et al., MICCAI 2025) demonstrated that injecting weak spatial supervision into self-supervised objectives improves downstream sample efficiency by roughly thirty percent on three public chest X-ray benchmarks."

If any one of these three sentences is missing or hand-wavy, the statement is weak. Identify yours during the second draft and tighten them until each one stands on its own as a complete claim.

Common revisions reviewers want

When MEXT panel members and lab professors read draft statements, the revision requests cluster into three categories. Anticipate them, and your first draft will already read like a third draft.

More specific method

The most common request. A draft statement that says "I will apply machine learning to medical imaging" gets the comment "which method, which dataset, which evaluation". The cure is to commit. Name the dataset by its standard public abbreviation. Name the method family. Name the evaluation metric. If you do not yet know enough to name them, spend the next week reading two or three recent benchmark papers in your area until you do. Reviewers can tell the difference between an applicant who omitted specifics for space and one who omitted them because they did not have them.

Fewer buzzwords

The second most common request. A draft that uses "cutting-edge", "state-of-the-art", "novel", "innovative", "paradigm-shifting", "transformative", and "revolutionary" in the same paragraph reads as marketing copy. Strip every one of these adjectives. If a claim survives without the adjective, the adjective was filler. If the claim collapses without the adjective, the claim was the adjective — and it was hollow. Replace "cutting-edge transformer architecture" with "vision transformer". Replace "novel approach" with the specific technical contribution. Replace "state-of-the-art performance" with the actual benchmark number. The technical paragraphs in the sample above use almost no adjectives by design.

Less hand-waving on novelty

The third common request. Draft statements often claim that a specific approach has never been tried before, when a fifteen-minute literature search would show that it has. The fix is twofold. First, do the literature search before you draft. Second, scope the claim narrowly enough to be defensible — "view-aligned self-supervised pretraining for chest radiography under the small-labeled-data regime" is a defensible claim of contribution, while "the first AI for medical imaging" is not. Reviewers respect a narrow, defensible contribution. They do not respect a sweeping claim that anyone in the field would immediately recognize as overstated. For more on the broader decision-making frame around lab choice, see how to choose a Japanese graduate lab and the related considerations in kenkyusei vs direct Master's application.

Field-specific notes

The structure above applies across most STEM fields with only minor adjustments. For computer science, AI, and machine-learning applicants, see Computer Science Master's in Japan and studying AI and ML in Japan for field-level context on which Japanese groups are strong and how to position your research plan inside the active research areas. The sample above is a CS sample, but the structure carries over to materials science, biology, civil engineering, and the social sciences.

For applicants who want to align their statement with the realistic application timeline and avoid common scheduling mistakes, see application timeline for Japanese graduate schools. Many statements suffer because they were written under deadline pressure; the fix is starting the lab-search and paper-reading work months earlier.

Final checklist before submission

Run through every item before you submit. Each item corresponds to a real reason a previous applicant got rejected.

  • The statement fits inside two pages of the official MEXT 2027 template, no overflow.
  • A specific research title appears in the header, narrow enough to support two years of work.
  • The first paragraph names the open research problem in one explicit sentence.
  • Two or three real, recent benchmark papers are cited inline with author and year.
  • The method section names at least one specific dataset, one method family, one baseline, and one evaluation metric.
  • A specific Japanese professor is named with full name and university affiliation.
  • At least one specific recent paper from that professor's group is cited with venue and year.
  • The why-this-lab paragraph explicitly ties your method to the cited paper from the lab.
  • No childhood anecdote or US-style personal opener.
  • No "cutting-edge", "state-of-the-art", "novel", or "paradigm-shifting" adjectives.
  • No claim that no one has ever attempted your approach before.
  • One short sentence on Japanese language ability and JLPT plan.
  • The document has been read by at least one academic in your field and one strong English editor.
  • Filename matches MEXT submission requirements (LastName_FieldOfStudy.pdf).

The recommendation-letter side of the application has its own checklist; see recommendation letter for Japanese grad school to make sure your two letter writers are aligned with the research plan. The two MEXT track-specific guides are MEXT Embassy Recommendation 2027 and MEXT University Recommendation 2027 — read whichever applies to your application before finalizing.

Once your draft has been through two rounds of revision, browse the professors directory to confirm your target professor's most recent publications appear correctly in your citation list. The directory is updated quarterly and surfaces recent papers that may not yet appear on the lab homepage.

Bottom line

The MEXT 2027 field-of-study statement is two pages of plain text that decide whether you reach the interview. Reviewers want a specific research problem stated in one sentence, a method section concrete enough that another researcher could see how to start, and a why-this-lab paragraph that names a specific Japanese professor, cites at least one of their recent papers, and ties your proposed method to that paper. They do not want a personal essay, a list of buzzwords, or a sweeping claim of novelty. The sample above is fictional but structurally complete; use it as a model, not a template. Spend at least two weekends reading recent papers from your target lab before you draft, take six to twelve weeks total from first draft to final submission, and run through the final checklist before you click submit. Applicants who treat this document with the seriousness it deserves are the ones who get interviewed; applicants who treat it as paperwork are the ones who get the rejection email in September.

Frequently asked questions

How long should the MEXT field-of-study statement be?

Two pages, single-spaced, 11pt or 12pt. The MEXT 2027 application form provides a fixed two-page template ("Field of Study and Research Plan") and reviewers expect you to fill it without overflow. A statement that runs to three pages is rejected at the formatting stage at most embassies, and one that runs only one page is read as underprepared. Aim for roughly 700 to 900 words across the two pages, with the proportions split as half a page on the problem, one full page on the method, and half a page on why Japan and which lab.

Do I have to name a specific Japanese professor?

Yes. A field-of-study statement that does not name a specific professor and at least one of their recent papers gets filtered before the interview at most embassies and at every university recommendation review. The reviewers are usually Japanese academics who know the field, and they are explicitly checking whether you have done the work of identifying a real lab that could host you. A generic statement that mentions only "a leading university in Japan" is read as a mass-produced application and discarded. If you do not yet have a target lab, use the four to six weeks before submission to find one.

Can I write the statement in Japanese?

You can, but you almost never should. MEXT accepts both English and Japanese, and unless you have JLPT N1 plus the writing comfort of a native-equivalent academic, English is the safer choice. Reviewers are looking for clear scientific reasoning, and a Japanese statement with even mild grammar awkwardness reads as weaker than the same content in clean English. Save your Japanese ability for the interview and for the closing line where you note your JLPT level. If you are still building Japanese reading and writing comfort, our JLPT N3 study hub covers the level at which mixing Japanese into application materials starts to feel natural.

How specific does the method section need to be?

Specific enough that another researcher could read it and start to plan an experiment. "Apply deep learning to medical imaging" is too vague. "Pretrain a vision transformer on the unlabeled MIMIC-CXR chest X-ray corpus using SimCLR-style self-supervised contrastive learning, then fine-tune on a smaller labeled rare-disease subset and benchmark transfer performance against a fully supervised baseline" is the right level. You do not need to commit to every hyperparameter — but you do need to name the dataset, the method family, the evaluation, and the comparison baseline. Reviewers cut applicants who hand-wave on the method.

What is the single most common reason MEXT statements are rejected?

Generic phrasing combined with no named professor. The pattern looks like this: a strong-sounding paragraph about why Japanese research excellence inspires the applicant, a vague problem statement, a method section that lists three buzzwords, and a closing line saying the applicant hopes to study at "a top university in Japan". This statement is interchangeable with five hundred others received that cycle and gets rejected at the first-pass screen. The cure is concrete specifics — name a paper, name a professor, name a dataset, name a method.

Should I use US-style "personal statement" framing?

No. The MEXT field-of-study statement is a research plan, not a personal essay. US graduate admission essays often open with a childhood anecdote or a personal motivation story; that framing is read as off-topic by Japanese reviewers and uses space that should go to the research problem. Skip the "I have always loved Japan" opener, skip the "ever since I was a child" anecdote, and lead with the research question on line one. A small note about your motivation is fine in two or three sentences inside the why-Japan paragraph, but it should never be the lede.

How early should I start writing the statement?

Six to twelve weeks before the deadline. The first draft is usually written in three to four sittings, after which it sits with two or three reviewers for a week each, then comes back for at least two rounds of revision. Applicants who try to write the statement in one weekend produce visibly weaker work, and reviewers can tell. The longest part is not writing — it is reading enough recent papers from your target lab to be able to write the why-this-lab paragraph credibly. Budget at least two weekends for paper reading before you draft the statement.
