cross-posted from: https://lemmit.online/post/4242386
This is an automated archive made by the Lemmit Bot.
The original was posted on /r/pcmasterrace by /u/trander6face on 2024-10-24 11:11:47+00:00.
I rarely use it, mostly for sentiment/grammar analysis on formal stuff/legalese. I kind of rarely use LLMs at all (1 or 2 times a month; I just don't have a use case). As for how good they are: tiny models are not good in general, because they don't have enough capacity to store knowledge, so my use case is usually pure language processing. I have, though, previously used one in a work demo to generate structured data from unstructured data. Basically, if you provide the info, they can perform well, so you could potentially build something that fetches web search results, feeds them into the context, and uses them (many such projects are available, basically something like Perplexity but open).
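The unstructured-to-structured pattern from that work demo can be sketched roughly like this. This is a minimal illustration, not the actual demo: `run_llm` is a hypothetical stand-in for however you call your local model, and it returns a canned response here so the sketch is self-contained.

```python
import json

def run_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a small local model.
    # A real version would send `prompt` to your model of choice;
    # here it just returns a canned JSON reply so the sketch runs as-is.
    return '{"name": "Jane Doe", "date": "2024-10-24", "amount": "1200 USD"}'

def extract_structured(text: str, fields: list[str]) -> dict:
    """Ask the model to pull the named fields out of free-form text as JSON."""
    prompt = (
        "Extract the following fields from the text below and reply with "
        f"JSON only, using these keys: {', '.join(fields)}.\n\nText:\n{text}"
    )
    raw = run_llm(prompt)
    # Tiny models often emit malformed JSON; in practice you would wrap
    # this parse in a retry/validation loop.
    return json.loads(raw)

record = extract_structured(
    "Invoice from Jane Doe dated 2024-10-24 for 1200 USD.",
    ["name", "date", "amount"],
)
print(record)
```

The same skeleton extends to the Perplexity-style idea: fetch search results, paste them into `text`, and ask the model to answer from that provided context instead of from its own (limited) knowledge.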