@AZisk
My LLM calculator estimator thingie: https://llm-inference-calculator-rki02.kinsta.page/
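For context, the core estimate behind a calculator like this is simple: weight memory is parameter count times bytes per parameter, plus some overhead. A minimal sketch of that idea (my own approximation, not the linked tool's actual model; byte-widths and the overhead factor are illustrative):

```python
# Rough LLM memory estimate: weights = params * bytes_per_param,
# scaled by a fudge factor for KV cache and runtime overhead.
# Byte-widths are approximate; real quantization formats vary.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def estimate_memory_gb(params_billions: float, quant: str = "q4",
                       overhead: float = 1.2) -> float:
    """Approximate memory (GB) needed to load and run a model."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return weights_gb * overhead

# Example: a 405B-parameter model at 4-bit needs roughly 240 GB.
print(f"{estimate_memory_gb(405, 'q4'):.0f} GB")
```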
@andre-le-bone-aparte
Apple: 512GB of RAM for the M3 Ultra
Intel: Laughs in 6TB of Optane memory
@joachim4568
The old Intel Mac Pro supported up to 1.5 TB of RAM, and there is still no replacement for that use case. We are now at a third of it.
@whatever1538
1:50 I am always amazed by your wealth 😂
@rachadnajjar
Can you please test and load a 70B model on the latest Minisforum AI X1 Pro with the 96GB configuration? Thank you
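For rough sizing (my numbers, not tested on that machine): a 70B model's weights at 4-bit quantization come to about 35 GB, so it should fit in 96 GB of unified memory with room left for KV cache and the OS. A back-of-the-envelope check:

```python
# Does a 4-bit 70B model fit in 96 GB? Overhead allowance is a guess
# and grows with context length; usable memory is less than 96 GB
# since the OS shares it.
params = 70e9
bytes_per_param = 0.5                          # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9    # ~35 GB
kv_and_overhead_gb = 10                        # generous allowance
total_gb = weights_gb + kv_and_overhead_gb
print(f"~{total_gb:.0f} GB needed vs 96 GB available: "
      f"{'fits' if total_gb < 96 else 'does not fit'}")
```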
@RomPereira
Less than $10,000 for a 512 GB desktop capable of running an LLM this size reasonably well... not bad. Same price as a Mac Pro maxed out with a 2 TB SSD. The Studio comes with almost 2.5 times the RAM and a 30% faster CPU+GPU. Not a bad deal.
@amizan8653
That computer fully maxed out costs more than I paid for my car back in 2020.
@Tigerex966
I wonder if Apple has been sitting on these in storage for over a year.
@andre-le-bone-aparte
Question @4:00: Do you guys buy FOUR 5090s... or an Apple M3 Ultra... or, wait for it... 16 AMD 7900 XTXs for 400GB of VRAM?
@chanm01
So, with 800 GB/s of memory bandwidth it's not going to be faster than a dGPU-based solution, but with up to 512 GB of unified memory it'll also be a hell of a lot cheaper. Of course, $10k is still pretty expensive, so if you wanted to go even cheaper you could use a server board and try to run the model in system RAM, but then your token throughput would be super slow. Is the M3 Ultra Mac Studio really the sweet spot for running large models locally?
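The "super slow" intuition is easy to quantify: during decode, generating each token reads roughly every weight once, so throughput is capped near memory bandwidth divided by model size in bytes. A sketch with illustrative numbers (the model size and server bandwidth figures are assumptions, not from the video):

```python
# Bandwidth-bound decode ceiling: tokens/s <= bandwidth / bytes read per token.
# Dense models touch all weights per token; for MoE models, substitute
# the active-expert bytes instead of the full model size.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Hypothetical 400 GB model: M3 Ultra unified memory vs an
# 8-channel DDR5 server board.
for name, bw in [("M3 Ultra (800 GB/s)", 800),
                 ("8-ch DDR5 server (~300 GB/s)", 300)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, 400):.2f} tok/s upper bound")
```

On those assumptions the Mac tops out around 2 tok/s and the server board under 1 tok/s, which is why capacity alone doesn't make system RAM a comfortable option.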