Treat ChatGPT like a junior dev on your team — helpful, but always needing review.

Using ChatGPT Like a Junior Dev: Productive, But Needs Checking

2025/09/24 14:12

AI coding assistants like ChatGPT are everywhere now. They can scaffold components, generate test cases, and even debug code. But here’s the catch: they’re not senior engineers. They don’t have the context of your project’s history, and they don’t automatically spot when the tests themselves are wrong.

In other words: treat ChatGPT like a junior dev on your team — helpful, but always needing review.



My Experience: Fixing Legacy Code Against Broken Tests

I was recently working on a legacy React form validation feature. The requirements were simple:

  • Validate name, email, employee ID, and joining date.
  • Show error messages until inputs are valid.
  • Enable submit only when everything passes.
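
For a rough idea of what those rules mean in code, here is a minimal sketch of the validators (the field names, regex, and error messages are my assumptions, not the original spec):

// Minimal sketch of the validation rules (names and messages are assumptions)
const validators = {
  name: (v) => (v.trim() ? "" : "Name is required"),
  email: (v) => (/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v) ? "" : "Enter a valid email"),
  employeeId: (v) => (/^\d+$/.test(String(v)) ? "" : "Employee ID must be numeric"),
  // Simplified: an invalid or empty date also fails this check
  joiningDate: (v) =>
    new Date(v) <= new Date() ? "" : "Joining Date cannot be in the future",
};

// Submit stays disabled until every field validates cleanly
const isFormValid = (values) =>
  Object.keys(validators).every((field) => validators[field](values[field]) === "");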

The tricky part? I didn’t just have to implement the form — I had to make it pass an existing test suite that had been written years ago.

I turned to ChatGPT for help, thinking it could quickly draft a working component. It generated a solution — but when I ran the tests, they kept failing.

At first, I thought maybe I had misunderstood the requirements, so I asked ChatGPT to debug. We went back and forth multiple times. I provided more context, clarified each input validation rule, and even explained what the error messages should be. ChatGPT suggested fixes each time, but none of them worked.

It wasn’t until I dug into the test suite myself that I realized the real problem: the tests were wrong.



The Test That Broke Everything

One test hard-coded "2025-04-12" as a “future date”:

changeInputFields("UserA", "user@email.com", 123456, "2025-04-12");
expect(inputJoiningDate.children[1])
  .toHaveTextContent("Joining Date cannot be in the future");

The problem? We’re already past April 2025. That date is no longer in the future, so the expected error message would never appear. The component was fine — the tests were broken.

I had to dig through the logic, analyze the assumptions, and rewrite the test with relative dates, like so:

// Corrected test using relative dates
const futureDate = new Date();
futureDate.setDate(futureDate.getDate() + 30); // always 30 days ahead
const futureDateStr = futureDate.toISOString().slice(0, 10);

changeInputFields("UserA", "user@email.com", 123456, futureDateStr);
expect(
  screen.getByText("Joining Date cannot be in the future")
).toBeInTheDocument();

This small change makes the test time-proof: it will pass regardless of the current date.
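
As an aside, changeInputFields appears to be a helper from the legacy suite that fills in all four fields. A rough reconstruction might look like this (the label text and event wiring are my assumptions, not the original helper):

// Hypothetical reconstruction of the legacy test helper (labels are assumed)
import { screen, fireEvent } from "@testing-library/react";

function changeInputFields(name, email, employeeId, joiningDate) {
  fireEvent.change(screen.getByLabelText("Name"), { target: { value: name } });
  fireEvent.change(screen.getByLabelText("Email"), { target: { value: email } });
  fireEvent.change(screen.getByLabelText("Employee ID"), { target: { value: String(employeeId) } });
  fireEvent.change(screen.getByLabelText("Joining Date"), { target: { value: joiningDate } });
}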



Lessons Learned

  1. AI will follow broken requirements blindly - ChatGPT can’t tell that a test is logically invalid. It will try to satisfy the failing test, even if the test itself makes no sense.

  2. Treat output like a junior dev’s PR - ChatGPT’s suggestions were helpful as scaffolding, but it struggled to see the root cause. I had to step in, dig through the legacy code, and analyze the tests myself.

  3. Tests can rot too - Hard-coded dates, magic numbers, or outdated assumptions make test suites brittle. If the tests are wrong, no amount of component fixes will help.

  4. Relative values keep tests reliable - Replace absolute dates or values with calculations relative to today, as sketched below. This ensures your tests work across time.
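
One way to keep that rule easy to apply across a suite is to centralize the date math in a small helper (the helper below is illustrative, not part of the original codebase):

// Hypothetical helper: ISO date string n days from today
// (positive n = future date, negative n = past date)
function daysFromNow(n) {
  const d = new Date();
  d.setDate(d.getDate() + n);
  return d.toISOString().slice(0, 10);
}

// Usage in a test, always relative to the current date
changeInputFields("UserA", "user@email.com", 123456, daysFromNow(30));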



How to Work Effectively With AI Tools

  • Give it context, but don’t expect it to reason like a senior dev.

  • Ask “why”, and inspect its explanations carefully.

  • Validate everything yourself — especially when working with legacy code.

  • Iteratively refine — use AI as scaffolding, but you own the fix.



Closing Thoughts

My experience taught me a simple truth: AI can accelerate coding, but it cannot replace human judgment, especially when dealing with messy, legacy code and outdated tests.

Treat ChatGPT like a junior teammate:

  • Helpful, eager to please, fast.
  • Sometimes confidently wrong.
  • Needs review, oversight, and occasionally, a reality check.

If you keep that mindset, you’ll get the productivity boost without blindly following bad guidance — and you’ll know when to dig in yourself.


💡 Takeaway: When working with code, the human developer is still the ultimate problem-solver. AI is there to assist, not to replace your reasoning.
