
AI Prompt Engineering for IT Pros Without Producing Slop


There is no shortage of nonsense written about AI prompt engineering. I am not claiming to be an expert, but I have spent time researching, testing, and working out what actually improves the output. AI, or more accurately the LLM, since it is not true AI, is now an invaluable assistant that doesn't charge by the hour or day.


Prompt engineering is simply the skill of asking clearly, giving the right context, setting boundaries, and knowing what sort of result you actually want. That is it. No magic phrases. No hidden syntax. No secret incantation that turns a chatbot into an infallible architect, engineer, or security consultant.


For technical people, that is actually good news. It means prompt engineering is not some ridiculous new profession. It is just the same engineering discipline you already use elsewhere: precision, structure, context, constraints, and validation.


The basic rule

Take this as a starting point.


A weak prompt looks like this:

  • How do I make an AI drone?


That sounds reasonable until you notice what is missing. No audience, no scope, no safety boundary, no skill level, no budget, no structure, and no clue whether the answer is supposed to be a quick overview, a shopping list, a build guide, or a technical architecture.


A better version looks like this:

  • Explain how to build a hobbyist AI drone for a technical reader familiar with Linux, Python, Jetson or Raspberry Pi devices, and basic electronics. Focus on a safe build using a flight controller, companion computer, camera module, telemetry link, and onboard computer vision. Structure the answer as hardware, software, integration, testing, and safety considerations. Keep it practical and avoid hype.


That is the heart of prompt engineering. Remove ambiguity, get better output.


A simple structure that works

Most useful prompts contain the same core parts, whether you are generating an article, reviewing code, analysing a screenshot, or extracting structured data.


You need the task, the context, the constraints, the source material if there is any, and the output shape.


In plain English, that usually means:

  • What you want done.

  • Who it is for.

  • What matters.

  • What should be avoided.

  • What the result should look like.


A practical formula is:

  • Act as a [role].

  • Create [output].

  • It is for [audience].

  • Use this context: [context].

  • Requirements: [constraints].

  • Avoid: [exclusions].

  • Output as: [format].

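The formula above is easy to turn into a reusable helper. A minimal sketch in Python, where the function name and all example values are illustrative rather than any real API:

```python
# Minimal sketch: assembling a prompt from the template parts.
# build_prompt() and the example values below are hypothetical.

def build_prompt(role, output, audience, context, requirements, avoid, fmt):
    """Combine the template parts into a single prompt string."""
    return (
        f"Act as a {role}. "
        f"Create {output}. "
        f"It is for {audience}. "
        f"Use this context: {context}. "
        f"Requirements: {requirements}. "
        f"Avoid: {avoid}. "
        f"Output as: {fmt}."
    )

prompt = build_prompt(
    role="drone systems engineer",
    output="a component checklist for a hobbyist AI drone",
    audience="a technical reader familiar with Linux and Python",
    context="budget build, safe indoor testing only",
    requirements="cover flight controller, companion computer, and camera",
    avoid="marketing language and unverified performance claims",
    fmt="a bulleted list grouped by subsystem",
)
print(prompt)
```

The value is not the code itself but the habit: if a prompt is worth reusing, every one of those slots should be filled deliberately rather than left to the model to guess.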

Practical prompt examples

The easiest way to understand prompt engineering is to look at weak prompts beside stronger ones. The pattern becomes obvious very quickly.


Explaining the basics

Weak prompt:

  • How do I make an AI drone?


Better prompt:

  • Explain how a hobbyist can build an AI drone for learning purposes. Assume the reader understands Linux, Python, and basic electronics but has never built an autonomous drone before. Cover the main components, how they fit together, and the difference between a normal drone and one with onboard AI functions. Keep it practical and readable.


Why it works:

It defines the reader, the scope, and the level of explanation.


Writing a technical article

Weak prompt:

  • Write about AI drones.


Better prompt:

  • Write a technical article for engineers and advanced hobbyists on how to build an AI drone using a flight controller, companion computer, camera, and onboard vision model. Focus on real-world build choices, compute limits, latency, telemetry, control loops, and safety constraints. Keep it grounded, practical, and free of generic tech hype.


Why it works:

It narrows the topic and tells the model what matters.


Creating a beginner build guide

Weak prompt:

  • Give me the steps to build an AI drone.


Better prompt:

  • Create a beginner-friendly build guide for a hobbyist AI drone. Assume the reader is comfortable with Linux and Python but is new to drone hardware. Cover frame, motors, ESCs, flight controller, battery, camera, companion computer, telemetry, and safe testing. Keep it simple and structured.


Why it works:

It asks for a guide, defines the starting point, and gives the answer a clear shape.


Designing the system properly

Weak prompt:

  • Design an AI drone.


Better prompt:

  • Design a hobbyist AI drone using a standard flight controller for stabilisation and a separate companion computer for onboard AI processing. Explain the role of the ESCs, motors, GPS, IMU, camera, telemetry radio, battery, and onboard computer. Show how data flows between components and highlight likely issues with latency, power draw, and signal loss.


Why it works:

It turns a vague request into an actual architecture exercise.


Working within a budget

Weak prompt:

  • Suggest parts for an AI drone.


Better prompt:

  • Write a build plan for an AI drone with a budget of £800 to £1,200. Prioritise stability, safe testing, and offline object detection rather than acrobatics or long-range flight. Use a conventional flight controller and a separate companion computer, such as a Jetson Nano. Output the answer as a bill of materials, architecture summary, software stack, test plan, and known limitations.


Why it works:

It adds budget, priorities, and a specific output format.


Generating Python safely

Weak prompt:

  • Write Python for an AI drone.


Better prompt:

  • Write a Python 3 script for the Jetson Nano on a hobbyist AI drone that reads frames from a camera, performs basic object detection, and outputs structured telemetry events rather than directly controlling flight. Keep the code modular, readable, and safe for lab testing. Include error handling and log detection confidence, timestamp, and object class.


Why it works:

It defines what the code should do, what it should not do, and how it should behave.
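The key constraint in that prompt, emit telemetry events rather than flight commands, can be sketched in a few lines. The `detect_objects` stub and event fields below are illustrative; a real build would run an actual vision model on live camera frames:

```python
# Sketch of the "telemetry events, not flight control" pattern.
# detect_objects() is a hypothetical stand-in for a real detector.
import json
import time

def detect_objects(frame):
    """Stand-in for a real detector: returns (class, confidence) pairs."""
    # A real implementation would run an onboard vision model here.
    return [("person", 0.91)] if frame else []

def to_telemetry_event(obj_class, confidence):
    """Package a detection as a structured event, never a command."""
    return {
        "timestamp": time.time(),
        "object_class": obj_class,
        "confidence": round(confidence, 2),
    }

events = [to_telemetry_event(c, conf) for c, conf in detect_objects("frame-0")]
print(json.dumps(events))
```

Keeping detection and flight control in separate processes like this means a crashed vision script degrades logging, not the aircraft.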


Reviewing PowerShell properly

Weak prompt:

  • Check this script.


Better prompt:

  • Review this PowerShell script as if you were validating an engineering or audit tool. Focus on logic errors, false positives, bad assumptions, output usefulness, and edge cases. Do not rewrite the whole script unless necessary. Show the most important issues first, then provide targeted fixes.


Why it works:

It asks for review rather than a random rewrite.


Images and screenshots

Weak prompt:

  • What is this screenshot?


Better prompt:

  • Analyse this screenshot and explain what the error likely means, what subsystem is involved, and what checks should be performed first. Do not just describe the image contents.


Why it works:

It asks for interpretation.


Generating an image

Weak prompt:

  • Make an image of an AI drone.


Better prompt:

  • Create a wide technical blog banner in a dark, high-tech style. The subject is an AI drone operating in a hostile RF environment, with subtle references to computer vision, telemetry, navigation, and control link interference. Keep it sharp, serious, and minimal. Avoid toy-drone styling, cartoon clichés, and generic stock art.


Why it works:

It defines purpose, style, and what to avoid.


Extracting structured data

Weak prompt:

  • Summarise this drone report.


Better prompt:

  • Read this report and extract the findings into JSON with fields for title, severity, affected_component, summary, evidence, and remediation. Keep the wording concise, preserve technical meaning, and do not invent missing details. If something is unclear, mark it as unknown rather than guessing.


Why it works:

It tells the model exactly how to shape the output.
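Because the field names are fixed, the model's output can be checked mechanically before anyone trusts it. A minimal sketch, using the same field names as the prompt and made-up sample data:

```python
# Sketch: validating model-extracted JSON against the requested fields.
# The sample finding below is invented for illustration.
import json

REQUIRED = ["title", "severity", "affected_component",
            "summary", "evidence", "remediation"]

def validate_finding(raw):
    """Fill any missing field with 'unknown' rather than guessing."""
    finding = json.loads(raw)
    return {field: finding.get(field, "unknown") for field in REQUIRED}

sample = '{"title": "GPS drift", "severity": "medium", "summary": "..."}'
print(validate_finding(sample))
```

This mirrors the instruction in the prompt itself: gaps are marked as unknown, never invented, and malformed JSON fails loudly at `json.loads` instead of slipping through.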


Asking for a serious long-form article

Weak prompt:

  • Write a good article about AI drones.


Better prompt:

  • Write a long-form technical article on how to build a hobbyist AI drone using a flight controller, companion computer, camera, and onboard processing. Keep the tone direct, practical, technical, and grounded in real-world constraints. Cover hardware selection, software stack, vision processing, telemetry, latency, power draw, testing, and safety controls. Include practical examples from beginner to advanced level. No sales language, no obvious AI phrasing, and no exaggerated claims about autonomy or intelligence.


Why it works:

It defines tone, scope, and the usual traps to avoid.


How to validate output without reading every line

This is where AI-assisted work either becomes efficient or turns into a complete faff.


Cross-checking with other models:

If the task matters, use another model to challenge the answer. ChatGPT, Gemini, and Claude often fail in different ways, which makes cross-checking genuinely useful. You may notice I don't reference Microsoft's CoPilot, and there is a perfectly straightforward reason: in my opinion it is sub-optimal, roughly on par with Bing as a service when compared to Google.


And yes, multiple models agreeing can increase confidence. It is still not proof. Treat it as confidence scoring, not truth.


Validating output yourself:

If you're stuck with a particular model and need to validate every sentence manually, the time savings disappear. The answer is not to trust the output blindly. The answer is to validate more intelligently.


Start with structure. Are the right sections present? Are the main risks surfaced? Does the answer actually match the task? Is anything suspiciously specific without evidence behind it? Make the model expose its own weak points.


Ask for assumptions and uncertainty:

  • List any assumptions you made, anything that may be uncertain, and anything that should be verified before this is used operationally.


Ask for a self-critique:

  • Review your own answer and identify the weakest parts, likely inaccuracies, and anything that sounds more confident than the evidence supports.


A good review prompt for a second model is:

  • Review the following technical answer for factual accuracy, omissions, hidden assumptions, and overconfident wording. Do not rewrite it yet. First identify anything that looks wrong, weak, or unsupported.


When working from reports or documents, it also helps to force an evidence-first workflow:

  • Extract the key evidence points first. Then group them by severity. Then write a summary based only on those points.


For code, test behaviour and worry less about syntax. Run it with good input, bad input, and awkward edge cases.
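In practice that means a handful of quick checks, not a full test suite. A sketch, where `parse_voltage` stands in for any model-generated function you want to sanity-check:

```python
# Sketch: exercising generated code with good, bad, and edge inputs.
# parse_voltage() is a hypothetical example of generated code under test.

def parse_voltage(reading):
    """Example generated function: parse '11.1V'-style telemetry strings."""
    if not isinstance(reading, str) or not reading.endswith("V"):
        raise ValueError(f"bad reading: {reading!r}")
    return float(reading[:-1])

# Good input: the happy path the model almost always gets right.
assert parse_voltage("11.1V") == 11.1

# Edge case: zero is a valid reading, not an error.
assert parse_voltage("0V") == 0.0

# Bad input: should fail loudly, not return garbage.
try:
    parse_voltage(None)
    raised = False
except ValueError:
    raised = True
assert raised

print("all checks passed")
```

Three inputs like these catch most of the logic errors that a line-by-line read would miss anyway.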


What it is actually good for

Used properly, AI is good at drafting, rewriting, explaining, reviewing code, analysing screenshots, summarising reports, extracting structure from messy information, and turning rough notes into something usable.

It is also genuinely helpful for project work. If I can drag myself away from work, I may actually have time to build the AI drone. It can refine build guides, explain component roles, review Python snippets for image processing, help draft test plans, and turn messy technical notes into documentation that a person can actually read.


Final thoughts

The better you define the task, the audience, the constraints, and the output shape, the better the result tends to be. The less guessing the model has to do, the less rubbish you have to clean up afterwards.

