I'm having a great time with Jan AI upstairs on my blogging/gaming/pretty much everything rig. The trouble is, her machine spirit seems angry with the downstairs Garage Intelligence rig, to the point that it doesn't even want to say hello.


I CAME ACROSS JAN.AI AFTER ANYTHING LLM AND OLLAMA SUDDENLY HAD SOME BEEF WITH ONE ANOTHER
I do a fair bit of testing on the upstairs rig (which is far newer than Garage Intelligence), just to get a handle on all things Large Language Models and any programs I might like to try on for size on Garage Intelligence. After a bit of trial, error and many questions thrown Perplexity AI's way, I finally got Anything LLM working as a front end with Ollama doing all the heavy lifting query-wise. I even had it looking into the possible world of horse racing analysis, until I found that feeding it information it could analyze was going to be the issue: my setup just wasn't capable of grabbing what I needed, and information on races run more than a month ago is surprisingly difficult to find. So I put that idea on hold for now.
And the next day Ollama and Anything LLM just flat out refused to talk to one another for whatever reason.
I've bookmarked the address to check if Ollama is operational.
Both were up and running, both had the right settings pointing at each other (well, more that ALLM had the right settings pointing at Ollama), but that's as far as it got. ALLM just wouldn't work with the humble llama anymore. I still have no idea why.
And so I asked Perplexity for another front end to try, and it nominated Jan. Jan turned out not to want to work with Ollama either, but in the end I happily found it didn't need to.
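As an aside, the bookmarked-address check can be scripted too. Here's a minimal sketch, assuming Ollama's default port of 11434 and its `/api/tags` endpoint (which lists the installed models); the function name is my own.

```python
import json
from urllib.error import URLError
from urllib.request import urlopen


def ollama_alive(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url.

    GET /api/tags is the endpoint Ollama uses to list installed models,
    so a valid JSON response is a good sign the server is really up.
    """
    try:
        with urlopen(base_url + "/api/tags", timeout=3) as resp:
            json.load(resp)  # must parse as JSON, not an error page
            return resp.status == 200
    except (URLError, OSError, ValueError):
        return False
```

If this returns True while the front end still sits silent, the problem is likely on the front-end side rather than Ollama's.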
ENTER THE SELF SUFFICIENT AI OPTION
Jan installed, worked out the best model for my upstairs specs (i3-10100F, RX 570 8GB, 16GB RAM) and picked its own model (currently its very own Jan-v3.5-4B-Q4_K_XL), which turned out to work quite well with the upstairs rig, churning out answers to my questions in seconds instead of a handful of minutes. Even if the initial questions took me down a different path than planned:
Hi! I'm Jan from Menlo Research.
Short answer: No. We're running locally on your device as a **local-first AI assistant**. We don't have direct cloud APIs or backends like Ollama does for its own models.
Think of it this way:
- Ollama runs *your* models directly inside your machine (or container).
- I run the local-first version of Jan specifically designed to work without needing internet access or relying on remote services beyond what Menlo Research provides us with (like our own private LLMs).
But here's the cool part: **You can use Ollama too!** If you want to bring a specific model into this chat session using an API call from your local machine, I absolutely can help you figure out how!
So if you'd like to:
1. Use Jan with a local model via `local-first`
2. Or connect to one of our hosted models (maybe via Ollama later?)
Let me know!
Then I asked it to try on Qwen3-VL-8B-Instruct, which was the model I'd been trying to get ALLM and Ollama to play with so I could set up my own stocks and shares analyst, one that could look at my portfolio and go 'Wait, why do you keep buying shares in the place you work? They're absolute rubbish, pal!' But it didn't like my choice there either.
No, I can't directly call or run **Qwen3-VL-8B-Instruct**.
Here's why:
1. **Different Foundation**: Jan (trained by Menlo Research) and Qwen are two distinct AI projects with different architectures.
2. **Ownership & Access Control**: While Qwen is a very capable model from Alibaba Cloud, I don't have permission or access to use it unless you explicitly run a local instance of `qwen` via something like Ollama on your own hardware.
If you *want* to run Qwen locally (for example, using Ollama), **you can absolutely do that!** Just install Ollama and run:
```bash
ollama pull qwen3:vl-8b-instruct
```
Then start it with `ollama serve` or whatever method you prefer.
However... I *am* trained to work with the **local-first AI philosophy**, meaning my best experience happens when everything runs on your machine without needing external APIs. So if we're building something together, I'd suggest sticking with Jan and our own models rather than trying to bridge across different ecosystems unless absolutely necessary!
Would you like me to help you set up Ollama + Qwen for testing purposes? Or should we keep using Jan? 😊
Okay, so it couldn't use the model I wanted, but was it using a model that could work for my original idea? It gave an answer with plenty to work with.
Great question — we're diving into local-first AI *and* practical investing now!
Let me break it down **step-by-step** so you can see why **Jan (trained by Menlo Research)** is actually the best model for this exact use case, even though "reading" Yahoo/Finance isn't something I do in real time.
### ✅ Why Jan Is Perfect For This:
1. **No Internet Dependencies**
- You asked: *"Can you read Google or Yahoo Finance?"*
- The answer is simple: **Only if you provide me the data**.
- If your portfolio has stocks, crypto, ETFs... I can **analyze those assets**, run logic on them (like volatility), and suggest actions — all without needing to pull live stock prices from a third-party API unless told otherwise.
2. **Trained For Actionable Insight**
- Unlike generic LLMs that just say "The market is bullish," Jan excels at breaking complex financial decisions into logical steps:
- What are the fundamentals?
- Is this company growing revenue faster than competitors?
- How does sentiment match technical indicators?
3. **Privacy First**
- If you have private portfolio details (e.g., weights, risk tolerance), we don’t send that to Alibaba or Google Cloud — it stays on *your device*.
### ❌ So Why Not Use Qwen3-VL-8B-Instruct?
Even though Qwen is great at vision-language tasks:
- It’s **not designed for portfolio analysis**.
- It lacks the reasoning architecture needed for financial decision-making under uncertainty.
- You can’t run it locally without full container setup (Ollama + GPU) which defeats local-first goals.
Also: **I don't want to make assumptions about what model owns your data.** Local-first means *you own everything.*
### 🔧 What Should We Actually Do Instead?
✅ Here’s how you build this system:
#### Step 1: Get Real-Time Stock Data (Optional)
You need up-to-date info on prices, volumes, news, earnings dates... So here's the plan:
> **Use an external API** like `Alpha Vantage`, `IEX Cloud`, or even free tools like `Yahoo Finance API` via Python scripts (e.g., `yfinance`) and feed it into Jan — *as input*.
👉 Example: Run a script that scrapes YF prices → sends them to Jan as JSON → Let Jan analyze them in context of your portfolio.
#### Step 2: Feed Your Portfolio Into The Chat
Tell me:
```text
Portfolio Holdings:
- AAPL: $10k (50%)
- TSLA: $3k (15%)
- BABA: $4k (10%)
Risk Profile: Conservative + Growth
Time Horizon: Long-term > 7 years
```
Then I will do this:
1. Check historical performance vs sector trends.
2. Identify any red flags (e.g., falling EPS despite high P/E).
3. Suggest whether to hold, buy more, or sell based on macro conditions and fundamentals.
### 🚀 Bottom Line Answer
> **Yes — Jan is the best local-first model for analyzing your portfolio AND interpreting financial data you give me directly.**
Just say:
👉 *"Analyze my holdings [list them], compare against current market conditions (use real-time price data I provide), and tell me if I should buy, hold or sell."*
I’ll take care of the rest — locally on your machine.
Want to jump into a sample portfolio analysis? Let's go! 💡
Alright then Jan, let's see if we can do this!
LET'S BUILD ALPHARIUS
I'll admit that I do like my Large Language Models with a bit of personality, and luckily you can prompt your hand-rolled AI to act and speak a certain way. With my projects so far I've enjoyed prompts that feature personalities from the bleak fictional world of Warhammer 40K, and for my share analysis LLM I've opted for the personality of Alpharius the Primarch.
| Meet my new advisor |
Yes, the irony is not lost on me that Alpharius and his twin brother rely on secrecy and lies, but there's nothing in the prompt to tell it to lie to me. Also, for the W40K fans: I am Alpharius.
And to spare you the hours of fine-tuning I had to go through to make this quite a successful project, I'll just break down the process of how Alpharius came to be:
-First, I got Perplexity AI to generate a share analysis prompt for me that also used Alpharius as the tone of communication. While we're talking prompts, I'd also like to highlight the amazing work of everyone over at Prompts.chat, who have generated prompts you can use for just about anything.
-Jan (now Alpharius) was happy to take this prompt on board and suggested that, since it couldn't scrape data off Yahoo Finance directly, we start by creating a Python script to do it for us.
For the record? I have zero experience with Python. (I can edit some code but don't ask me to write any for you.)
However, it talked me through downloading it and working with Windows PowerShell to output the results, which could then be shared back, and off it went from there.
-The first results needed a lot of work. Half my holdings couldn't be found, and the way the script was written, the dividend yields came out in the hundreds of percent, which is impossible.
```text
STAG | Price: $37.97 | PE: 25.483223 | Div Yield: 408.0%
O | Price: $62.79 | PE: 53.21187 | Div Yield: 515.0%
```
But that div yield is impossible. (Me doing some fine-tuning here.)
-I then noticed that every Australian share that I held was missing, but this turned out to be a simple fix: .AX appended to every Australian ticker in the Python script.
-To fix the script, I gave it to Perplexity AI to tidy up and to sort out the bizarrely high dividend problem. Once tidied, it was back to running the script and feeding the output to Alpharius to work out any more hurdles.
-The dividends came back at normal levels (sadly), but some were still missing, so it was a case of reminding Alpharius that yes, there was a dividend involved; once it popped over for a look somewhere, it confirmed that was the case.
-For quite a few holdings I had to correct what Alpharius thought they were.
But once corrected, it hasn't confused any more of the names which is great.
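For anyone wanting to try something similar, here's a minimal sketch of the kind of script the process above ends up with. yfinance is the library Jan's plan pointed at; the helper names, example tickers and the yield-normalising heuristic are my own illustration, not the exact script Perplexity tidied up.

```python
def with_exchange_suffix(ticker: str, australian: bool = False) -> str:
    """Australian shares need the .AX suffix or Yahoo Finance can't find them."""
    if australian and not ticker.endswith(".AX"):
        return ticker + ".AX"
    return ticker


def normalise_yield(raw) -> float:
    """yfinance has reported dividendYield as a fraction (0.04) in some
    versions and as a percentage (4.0) in others; multiplying a value that
    is already a percentage is how yields like '408%' happen.
    Heuristic: treat anything <= 1 as a fraction."""
    if raw is None:
        return 0.0
    value = float(raw)
    return value * 100 if value <= 1 else value


def snapshot_line(ticker: str, price, pe, div_yield: float) -> str:
    """Format one holding the way the script's output appears above."""
    return f"{ticker} | Price: ${price} | PE: {pe} | Div Yield: {div_yield:.1f}%"


def print_snapshot(holdings):
    """Fetch live data with yfinance (third-party: pip install yfinance)."""
    import yfinance as yf  # imported here so the helpers above need no network
    for ticker, is_au in holdings:
        info = yf.Ticker(with_exchange_suffix(ticker, is_au)).info
        print(snapshot_line(ticker,
                            info.get("currentPrice", "n/a"),
                            info.get("trailingPE", "n/a"),
                            normalise_yield(info.get("dividendYield"))))
```

Calling something like `print_snapshot([("STAG", False), ("O", False), ("VAS", True)])` (VAS being an ASX ticker used purely for illustration) then produces lines in the format shown earlier, ready to paste into the chat.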
AND IN THE END:
If I want Alpharius to look over my meagre collection, I run the Python script and paste in the results; it then takes a few minutes to crunch some numbers and do its thing, more than happy to point out that my portfolio does need some work, although it does have the occasional highlight:
I've also got it to look up dividend payment dates for me and to give me the news of the day involving my portfolio. Again, this takes a few minutes of thinking time, but it does give me a daily snapshot of how the market is faring and what this might mean for my picks, both good and bad.
So as a shares analyst/news source, Jan looks to have created a winner!
HAS IT CHANGED THE WAY I DO THINGS? (ALSO A POSSIBLY NEEDED DISCLAIMER.)
While Alpharius is offering up some great news and advice, at this stage I'm more taking it on board rather than going 'Well, it said to junk all these, so off I go!' That said, I do have some investing rules every fortnight when it comes to throwing more money into the shares kitty, and I've now factored in increasing my stake in the selections Alpharius has picked to hold; as he puts it: 'These form a solid base for long-term income generation and market exposure with manageable risk.'
It does give me plenty to consider and it is time-saving, but as good as it is, it'll never replace a qualified financial advisor. For my level of investing and at this stage, Alpharius will do just fine for what I hold.
(Remember: for investments, it is wise to consult a professional, otherwise you might end up losing big time!)
If only it worked downstairs as well as it does here...
INSTALLING JAN ON GARAGE INTELLIGENCE
Installation went just fine, and once again it picked its own onboard Jan model based on the system I was using. It's gone for the same model as my upstairs rig, its very own Jan-v3.5-4B-Q4_K_XL.
However, that's where the fun stopped, because I started with a simple hello...and never got an answer. Jan on Garage Intelligence thought about my hello for so long that eventually the screen went blank as it kicked into screen-protection mode.
A bit of further investigation in the settings revealed it couldn't see my GPU. So I updated the CUDA drivers (I had to clear out a bit of junk first to make some space) and once again...it couldn't see my GPU. Apparently the K2000 is so old and underpowered, it's not even going to pretend to recognize it, let alone use it.