Dyf_Tfh@lemmy.sdf.org to Technology@lemmy.world · 7 months ago — Hello GPT-4o (openai.com)
If you didn’t already know, you can run some small models locally with an entry-level GPU.
For example, I can run Llama 3 8B or Mistral 7B on a 1060 3GB with Ollama. They are about as bad as GPT-3.5 Turbo, so overall mildly useful.
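Getting one of these models running is just a couple of commands. A minimal sketch (assuming Ollama is already installed; the default model tags pull 4-bit quantized weights, which is what makes an 8B model fit alongside a 3GB card with partial CPU offload):

```shell
# Download a small model (default tag is a 4-bit quantized build)
ollama pull llama3:8b

# Run it interactively, or pass a one-off prompt
ollama run llama3:8b "Summarize what an open-weight model is."
```

Ollama automatically offloads as many layers as fit into VRAM and keeps the rest on the CPU, so it still works on low-memory GPUs, just slower.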
Although there is quite a bit of controversy over what counts as an “open source” model; most are only “open weight”.