
My 5 Biggest Surprises as an AI Developer

I recently started doing software development work to expand and integrate AI systems.

Here are some of the biggest surprises I found along the way.


AI Developers Love Calling Everything “Fine Tuning”

When studying for the AI-102 exam, I thought fine tuning referred strictly to additional training of a model on new data (which can be fed into the system in different ways).

When actually working on an AI project, I learned that developers (and architects) sprinkle this phrase on just about everything.

Do you need to get more exposure to how LangGraph works? Well, that learning process is “fine tuning”.

Does the AI model need outside data assembled into the response at run time? You thought that was RAG? OK bro, I guess, but we’re going to call that fine tuning as well.

Is there a bug in the code somewhere? We used to call that a feature, but it looks like we just need to do some “fine tuning”.


Successful People Use AI A Lot

I’m aware of the MIT study showing that most AI projects fail.

What people are less familiar with is the data showing that freelancers who use AI make a ton more money than those who don’t.

If you’re not using AI (especially on an individual level) to get work done quickly, you are going to be at a disadvantage.

A lot of companies (including one of my current clients) understand this and are trying to adopt this capability.

And this includes companies that aren’t traditionally seen as technology companies.

I was recently surprised by a colleague who allowed one of his AI certifications to expire. He said he wasn’t getting any bites from the market. That might be because he was trying a year ago and things have heated up since then, or it might be factors beyond our control. In any case, I landed a spot on an AI project within a week of passing the AI-102.


Successful People Are Discreet About Using AI

Just like students hide how much they are leaning on ChatGPT to answer their homework problems (in spite of the learning potential and capability it brings), successful AI projects hide how much they lean on AI systems to understand what is going on.

There may soon come a day when AI no longer carries the stigma it currently does, but until then, clients (and teachers) want any assurance you can give them that you have internalized (i.e., learned) what you are sharing with them.

Even when it is totally infeasible to expect a developer to know the minutiae of network details.

Funny story: I talked to a high-level AI worker at another company on the same project who wanted to collaborate with me on how to reformat CoPilot data so it doesn’t look so much like it came from CoPilot.

Thankfully, you don’t need anything quite so elaborate. You can always be discreet about where you got the data. In some companies it is SOP to tell the client “it doesn’t matter” where the data comes from.

Or, alternatively, you can just tell the client you would need to investigate. If it’s something exotic, chances are your competitor doesn’t have the answer off the top of their head either, and this approach is slightly more transparent and authentic.


AI-Written Code Is Too Verbose and Depends on Learning From Humans

The quality of AI-written code can vary considerably (especially depending on what the AI was trained on). I’ve found that GitHub CoPilot can quickly add the features I need before I have to do a deep examination of how the pieces of code actually operate.

You can look at that as taking encapsulation to a new level or as criminal negligence, but we are likely heading toward a software market where you are set back for over-analyzing the code components. An AI veteran on my project helped me integrate GHCP with my IDE and encouraged others on the project to meet our deadlines by using AI.

Anticipate having to make adjustments to the output.

Depending on your framework (Python in this case), you might need to pare down three or four levels of truthiness checks and simplify the data handling to the few cases that are relevant to your project.
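To make that concrete, here is a hypothetical example (the function and data shape are my own invention, not from any real project): AI assistants often generate layered truthiness guards that collapse to a single expression once you know what shapes your data can actually take.

```python
# Hypothetical AI-generated style: every level gets its own guard.
def get_username_verbose(data):
    if data is not None:
        if isinstance(data, dict):
            if "user" in data and data["user"]:
                if "name" in data["user"] and data["user"]["name"]:
                    return data["user"]["name"]
    return None

# If your project guarantees data is a dict or None, one chained
# expression covers the same cases.
def get_username(data):
    return ((data or {}).get("user") or {}).get("name")
```

Both functions return the same results for the inputs this project actually sees; the second is just the pared-down version.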

The Disruption Means a Lot of New Things to Try

While you should be careful about reinventing the wheel even in something as new and disruptive as AI (for example, skills in training and building LLMs from scratch are an extremely narrow niche) … a lot of the basics in new AI tools are still being ironed out.

That means there is tremendous opportunity for creating new capabilities and mastering skills that no one else has yet.

For example, what is the best way to have an AI select (orchestrate) between tools? Should you use a single agent? An established framework like Semantic Kernel, Agent Foundry, LangGraph, or CoPilot Studio?

There’s no obvious solution. Exploration is needed.
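To show what the question even means, here is a minimal, framework-free sketch of tool orchestration. In a real agent (LangGraph, Semantic Kernel, etc.) the model itself chooses the tool; here a simple keyword router stands in for the model so the example stays self-contained, and both tools are stubs of my own invention.

```python
# A registry of tools the "agent" can call. Both are illustrative stubs.
TOOLS = {
    "weather": lambda city: f"(stub) forecast for {city}",
    "math": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def route(request: str) -> str:
    # In a real system, the LLM would return the tool name and arguments.
    # Here we route on a crude heuristic: arithmetic operators mean "math".
    if any(op in request for op in "+-*/"):
        return "math: " + TOOLS["math"](request)
    return "weather: " + TOOLS["weather"](request)
```

The hard open questions are everything this sketch waves away: how the model picks tools reliably, how arguments are validated, and what happens when a tool fails mid-plan.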

Another example is RAG (Retrieval Augmented Generation). As mentioned earlier, this paradigm extends what a model can provide content on … and there is no platform that makes this as simple as clicking a button (although some claim to have that capability).
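The core RAG loop is simple to sketch, even if production systems are not. This toy version (documents and scoring are my own invention) retrieves the most relevant document by word overlap, where a real system would use vector search, and folds it into the prompt sent to the model:

```python
# A toy document store standing in for a real vector database.
DOCS = [
    "The AI-102 exam covers Azure AI services.",
    "LangGraph builds stateful agent workflows.",
]

def retrieve(query: str) -> str:
    # Score each document by shared words with the query (a stand-in
    # for embedding similarity) and return the best match.
    q = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    # Fold the retrieved context into the prompt for the model.
    return f"Context: {retrieve(query)}\n\nQuestion: {query}"
```

Everything hard about RAG lives in the parts this sketch stubs out: chunking, embedding quality, ranking, and keeping the index fresh.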

Software development using AI is a lot like the Wild West right now. You’ll find one new thing happening over here and a completely different new thing happening over there. My suggestion is to try things out, learn from other people’s mistakes, and discover what interests you.

