@prologic@twtxt.net, are you running Ollama on your Mac Studio? How much RAM does it have? How does it perform with 7B and 13B models?