
GI AGENT 1 - Mikey the Mazda Mechanic V1.0

Well, since we've got liftoff and all drives are intact, it's probably time to install some kind of Artificial Intelligence into Garage Intelligence and see what it can do (if anything). And in my first experiment, I want to build an agent that can consult the Mazda repair manual for my 2012 Mazda 6, so I don't waste time trying to find the fix I'm looking for come service/troubleshoot time.


Surely something like that could work here? 

Surely?

AGENT NAME - Mikey the Mazda Mechanic

ROLE - I want to feed Mikey repair manuals and parts lists and have it on hand anytime I have questions about having to do basic maintenance.

PROGRAMS USED - Ollama + Anything LLM Desktop

Well I would if you behaved..


Since it's my first time mucking about with an AI on this system, I figured the best place to ask about this particular application was...another AI! So I put the proposal to Perplexity, and the suggestion came back quick smart: for something like this, I should use Ollama with Anything LLM as the front end.
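For anyone following along at home, the basic division of labour is: Ollama runs the models from the command line, and Anything LLM sits on top as the pretty face. A minimal sketch of the usual Ollama workflow looks like this (assuming the install actually finishes and the `ollama` command is on your PATH; the model name here is just one small example, pick whatever suits your hardware):

```shell
# Download a small model suited to modest hardware
ollama pull qwen2.5:3b

# Chat with it directly in the terminal
ollama run qwen2.5:3b

# See which models are installed locally
ollama list
```

Anything LLM then points at the Ollama server (it listens on localhost:11434 by default) and handles the chat interface and document uploads.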

Immediately this reminded me of the days building my arcade cabinet, where I'd be running M.A.M.E arcade software with another program as a front end to make it look pretty. In this case, however, it soon turned out that I could use the front end, but the back end decided to take some time off for Easter...

PROBLEMS ENCOUNTERED - Oh god, how much time have you got?

-Ollama took an ice age to download. Okay, actual hours. Which is amazing considering I could get a 19GB file from another site in under an hour, yet it took closer to six and a half hours to download Ollama, and it's only 1.7GB. And this was downloading on my much faster computer upstairs, too.
Downstairs I had another download going, which my son stopped, figuring it might be faster to download and install it through the PowerShell option.
It was just as slow, if not slower, than everything else.

Was everyone downloading Ollama at the same time or something?

-Problem number 2 occurred when I tried to start Ollama and nothing happened. The .EXE version coughed, spluttered and then went back to sleep. The CMD line start-up coughed up some code explaining that there wasn't enough VRAM, or something to that effect.
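For anyone hitting the same wall, a few real commands that make this kind of failure easier to diagnose (assuming an NVIDIA card; these are standard Ollama and nvidia-smi invocations, not anything specific to my setup):

```shell
# Run the server in the foreground so the actual error is readable,
# instead of the .EXE silently dying in the background
ollama serve

# In another terminal, check whether the server is up and what's loaded
ollama ps

# See how much VRAM the GPU actually has versus what's in use
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```

If the model won't fit in VRAM, Ollama will normally try to fall back to CPU, so a hard failure like mine usually means something else is also wrong.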

Look I get it, you're angry!

Which is strange, because people on Reddit have made Ollama work with a K2000, so go figure. For whatever reason mine doesn't want to play ball, so suddenly I'm trying to make an A.I agent with half the ingredients. Luckily Anything LLM jumped right to it!

COULD ANYTHING LLM PICK UP THE SLACK THERE?

Well, not quite all of it. But it did work right out of the box.


And amazingly it took one look at what I was working with and, even with my underpowered-for-AI batch of bits and pizzas, figured it could do something with it:

Qwen3 Vision 4B Instruct - no idea what it does but ask for it by name!

Yes, how nice it was to suggest the model it did, and it didn't once suggest working hand in hand with the sleeping Llama now taking up space on my first drive.

And just like that we have a working AI on Garage Intelligence! However...I can't say it's a particularly fast one. In fact it's painfully slow at some simple questions. 

Firstly I asked if it was possible to change its name (to Mikey the Mazda Mechanic, obviously) and it told me that was fine...after roughly five minutes of thinking time. Yes, I could have walked back upstairs, made myself a cuppa and come down to find it still mulling over my great idea. However, when I suggested Mikey, it took only 30 seconds to comment that it was a great name. Okay then.

Then I tried to feed it a simple text file just so I could get used to the process. 

Red means go yeah?

And it took quite a while to learn not only what it could and couldn't do when looking for files (I have no idea how to give it authorization to access specific folders yet), but also how to tell it to ingest something and keep it in its memory for later.

Eventually I discovered that if you upload the file to it and then tell it to rag-memory the attached, it will add the txt file and all its contents to its own files. And to test that it was all working and above board, I asked it to tell me what oil filter I was currently using.
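For the curious, a toy sketch of what that "rag-memory" step is conceptually doing: chop the document into chunks, score each chunk against the question, and hand the best chunk to the model as context. This is purely illustrative (the filter name below is a made-up example, and AnythingLLM uses real vector embeddings, not this crude word-overlap scoring):

```python
def chunk(text, size=200):
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def best_chunk(question, chunks):
    """Score chunks by word overlap with the question; return the winner."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

# Hypothetical manual contents; the real file's details aren't shown here
manual = "The car currently runs a Z436 oil filter. Engine oil is 5W-30 full synthetic."
question = "What oil filter am I currently using?"

# Retrieve the most relevant chunk and build the prompt the model would see
context = best_chunk(question, chunk(manual))
prompt = f"Using this context: {context}\nAnswer this: {question}"
```

The expensive part on a machine like mine isn't the retrieval, it's the model reading that context and generating an answer.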
Which it did. 
Eventually.



Let me zoom in just in case you can't make out the ice age it's taken here:


182 seconds is a touch over 3 minutes. Yes, it's taken over 3 minutes to look into its current information database (of about 4 lines of text) and finally track down what oil filter I have on record.

And in case you're wondering, Tok/s is tokens per second. An older version of ChatGPT reportedly chews through thousands of tokens per second in producing, formulating and constructing answers, which is why it can come back to you with lots of something within seconds. In comparison to that Formula One car, my horse and cart currently has no horse. And only three of the four wheels. But you know what? IT STILL WORKS!
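To put the 182 seconds into numbers, here's the back-of-the-envelope arithmetic (the token count for the answer is a hypothetical figure, since the screenshot only shows the elapsed time):

```python
# Elapsed time reported for the oil-filter answer
elapsed_s = 182

# Hypothetical length of the generated answer; the real count wasn't logged
answer_tokens = 250

# Generation rate: tokens produced per second of wall-clock time
tok_per_s = answer_tokens / elapsed_s
print(f"{tok_per_s:.2f} tok/s")        # prints "1.37 tok/s"
print(f"{elapsed_s / 60:.1f} minutes")  # prints "3.0 minutes"
```

At well under 2 tok/s, even a short answer takes minutes, which matches the waiting-around experience exactly.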

SO IT WORKS JIM, BUT NOT AS WE KNOW IT

Yes, you have to take the day off for an answer, but it does give you one eventually. And that's not bad at all for my very first test, with a dead llama on my hands and no instruction manual other than trial and error handy.

We know A.I works on this thing (albeit painfully slowly) but let's see if we can improve from here!
