Large language models are a powerful new primitive for building software. But since they are so new—and behave so differently from normal computing resources—it’s not always obvious how to use them.
In this post, we’re sharing a reference architecture for the emerging LLM app stack. It shows the most common systems, tools, and design patterns we’ve seen used by AI startups and sophisticated tech companies. This stack is still very early and may change substantially as the underlying technology advances, but we hope it will be a useful reference for developers working with LLMs now.
Source: https://a16z.com/2023/06/20/emerging-architectures-for-llm-applications/
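At the center of that stack is one recurring pattern: embed and index your data, retrieve the most relevant chunks at query time, and assemble them into the prompt. Here is a toy sketch of that loop, with `embed` and `complete` as hypothetical stand-ins for whatever embedding model and LLM API you actually use:

```python
# Toy sketch of the retrieval-augmented pattern at the core of the LLM app
# stack. `embed` and `complete` are placeholder stand-ins, not any vendor's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: hashed bag-of-words.
    v = np.zeros(512)
    for tok in text.lower().split():
        v[hash(tok) % 512] += 1.0
    return v

def complete(prompt: str) -> str:
    # Stand-in for an LLM completion call (OpenAI, a local model, etc.).
    return f"[LLM response to {len(prompt)} chars of prompt]"

def answer(query: str, docs: list[str], k: int = 3) -> str:
    doc_vecs = np.stack([embed(d) for d in docs])     # "index" the corpus
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = [docs[i] for i in np.argsort(sims)[-k:]]    # retrieve top-k chunks
    context = "\n\n".join(top)
    return complete(f"Answer using only this context:\n{context}\n\nQ: {query}\nA:")
```

In a production stack the bag-of-words stub becomes a real embedding model, the array becomes a vector database, and the prompt assembly becomes an orchestration framework, but the shape of the loop stays the same.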
Github :: free-response-scoring by David Colarusso
This repository shares code used to implement the methods described in Unsupervised Machine Scoring of Free Response Answers—Validated Against Law School Final Exams, presented at the Computational Legal Studies Conference, March 2022, hosted by the Center for Computational Law at Singapore Management University.
You can find links to all relevant content either in, or linked to from, the notebook titled Score Exams.
A good alternative to LLM-based text comparison. Note: the method is patent pending (Suffolk University).
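The repo implements its own pipeline, so treat the following only as a generic illustration of unsupervised consensus scoring, not Colarusso's actual method: rank each answer by its similarity to the centroid of all answers for the same question.

```python
# Illustrative only: score free-response answers by cosine similarity to the
# TF-IDF centroid of all answers. A generic consensus-scoring sketch, NOT the
# repo's patent-pending method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def consensus_scores(answers: list[str]) -> list[float]:
    # Vectorize all answers, then score each one by how close it sits to the
    # centroid (the "average" answer for the question).
    X = TfidfVectorizer(stop_words="english").fit_transform(answers)
    centroid = np.asarray(X.mean(axis=0))
    return cosine_similarity(X, centroid).ravel().tolist()

answers = [
    "Adverse possession requires open, notorious, and continuous use.",
    "The doctrine requires continuous and open use of the land.",
    "The parol evidence rule excludes prior oral agreements.",
]
print(consensus_scores(answers))  # outlier answers score lowest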
Github :: Flowise – Drag & drop UI to build your customized LLM flow using LangchainJS
Code on Github: https://github.com/FlowiseAI/Flowise
Reddit :: Tutorial – train your own llama.cpp mini-ggml-model from scratch!
Posted by u/Evening_Ad6637 in r/LocalLLaMA
Here I show how to train your own mini ggml model from scratch with llama.cpp! These are currently very small models (20 MB when quantized), and I think this is mostly for educational purposes (it helped me a lot to understand much more by “creating” my own model from nothing, and it helps to understand the parameters and their effects much better).
Otherwise, these mini models could be good enough to act as experts in very specific fields, e.g. generating text only in the style of a particular person. One model could speak like Cartman from South Park, another could write poems, and you could bring these “personas” into your general chat or roleplay conversations as supporting or minor roles to make “group” chats, brainstorming sessions, etc.
And the discussions on GitHub look very promising: we should soon be able to fine-tune pre-trained big models like LLaMA or Vicuna, and creating (Q)LoRA adapters in particular should be possible soon. : )
I think this will be the next game changer (imagine your model being fine-tuned incrementally in real time on top of its LoRA adapter, with your current conversation as the dataset; what awesome implications that would have).
EDIT:
You may also need the training script.
— Tutorial – train your own llama.cpp mini-ggml-model from scratch!
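For context, llama.cpp's training entry point at the time was its train-text-from-scratch example. The sketch below shells out to it with flags taken from that example's README; flag names and defaults may have changed in later llama.cpp versions, and all paths and hyperparameters here are placeholders.

```python
# Sketch: drive llama.cpp's train-text-from-scratch example from Python.
# Flags follow the example's README circa mid-2023 and may differ in newer
# versions; paths and hyperparameters are placeholders, not recommendations.
import subprocess

subprocess.run(
    [
        "./train-text-from-scratch",
        "--vocab-model", "models/ggml-vocab.bin",  # reuse an existing vocab
        "--ctx", "64",         # tiny context window
        "--embd", "256",       # embedding width
        "--head", "8",         # attention heads
        "--layer", "16",       # transformer layers
        "--checkpoint-in", "chk-mini.bin",
        "--checkpoint-out", "chk-mini.bin",
        "--model-out", "ggml-mini-f32.bin",
        "--train-data", "my_corpus.txt",  # plain-text training data
        "--adam-iter", "16",   # optimizer iterations per run
    ],
    check=True,
)
```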
5 Most Valuable Ways To Convert Unstructured Text To Structured Data | Width.ai
Here are five of the most valuable ways to convert unstructured text to structured data with natural language processing.
Source: 5 Most Valuable Ways To Convert Unstructured Text To Structured Data | Width.ai
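The article surveys several methods; as one small, concrete example of text-to-structure, named-entity recognition alone already turns prose into records you can load into a table. A sketch with spaCy, assuming its small English model is installed:

```python
# One small example of text -> structure: named-entity extraction with spaCy.
# Assumes `pip install spacy` and `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")
text = "Acme Corp. hired Jane Doe in Boston on January 5, 2023 for $120,000."
doc = nlp(text)

# Collect entities as (label, value) records, ready for a table or database.
records = [{"label": ent.label_, "value": ent.text} for ent in doc.ents]
print(records)
# e.g. [{'label': 'ORG', 'value': 'Acme Corp.'}, {'label': 'PERSON', 'value': 'Jane Doe'}, ...]
```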
From Medium :: Run Very Large Language Models on Your Computer | by Benjamin Marie | Towards AI
New large language models are publicly released almost every month. They are getting better and larger.
You may assume that these models can only be run on big clusters or in the cloud.
Fortunately, this is not the case. Recent versions of PyTorch offer several mechanisms that, combined with the Hugging Face Accelerate package, make running large language models on a standard computer relatively easy and without much engineering.
Source: Run Very Large Language Models on Your Computer | by Benjamin Marie | Towards AI
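The core trick is Accelerate's automatic device map, which splits a checkpoint across GPU, CPU RAM, and disk so that models larger than your GPU (or with no GPU at all) still load. A minimal sketch; the model ID is just an example, and it assumes transformers and accelerate are installed:

```python
# Sketch: load a large model with Hugging Face Accelerate's automatic device
# mapping, spilling layers to CPU RAM and disk as needed.
# Requires `pip install transformers accelerate`; the model ID is an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # example checkpoint, ~6B parameters
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # let Accelerate place layers on GPU/CPU/disk
    torch_dtype=torch.float16,  # halve memory versus float32
    offload_folder="offload",   # where disk-offloaded weights are stored
)

inputs = tokenizer("Large language models can", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```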
From Medium :: Mastering AI Summarization: Your Ultimate Productivity Hack
Unlock Your Second Brain with Streamlit and Hugging Face’s Free LLM Summarization: build a Python Webapp running on your PC.
Source: Mastering AI Summarization: Your Ultimate Productivity Hack
This uses a smaller language model tailored to text summarization. Maybe a good path for assessing student short answers and essays.
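As a rough sketch of the pattern the article describes, a local Streamlit app wrapping a small Hugging Face summarization model fits in a few lines. The model name below is one common distilled summarizer, not necessarily the article's exact choice:

```python
# Minimal local summarization web app: Streamlit + a small Hugging Face model.
# Requires `pip install streamlit transformers torch`.
import streamlit as st
from transformers import pipeline

@st.cache_resource  # load the model once, not on every rerun
def get_summarizer():
    return pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

st.title("Local summarizer")
text = st.text_area("Paste text to summarize", height=300)
if st.button("Summarize") and text.strip():
    result = get_summarizer()(text, max_length=130, min_length=30)
    st.write(result[0]["summary_text"])
```

Save this as app.py and launch it with `streamlit run app.py`; everything runs on your own machine, which matters if the texts are student work.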
Customizing GPT-3 for Your Application :: OpenAI
Developers can now fine-tune GPT-3 on their own data, creating a custom version tailored to their application. Customizing makes GPT-3 reliable for a wider variety of use cases and makes running the model cheaper and faster.
You can use an existing dataset of virtually any shape and size, or incrementally add data based on user feedback. With fine-tuning, one API customer was able to increase correct outputs from 83% to 95%. By adding new data from their product each week, another reduced error rates by 50%.
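At the time of this announcement, the flow looked roughly like the sketch below, using the legacy v0.x openai Python SDK; later SDKs renamed these calls, so treat this as a period sketch rather than current API guidance:

```python
# Sketch of the GPT-3 fine-tuning flow as announced, using the legacy
# openai v0.x Python SDK (newer SDKs renamed these calls).
# train.jsonl holds one {"prompt": ..., "completion": ...} object per line.
import openai

openai.api_key = "sk-..."  # your API key

# 1. Upload the training file.
f = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# 2. Start a fine-tune job on a base model.
job = openai.FineTune.create(training_file=f["id"], model="davinci")

# 3. Poll openai.FineTune.retrieve(job["id"]) until it finishes, then call
#    the custom model like any other; fine_tuned_model is set on completion.
resp = openai.Completion.create(
    model=job["fine_tuned_model"],
    prompt="Grade this answer: ...",
    max_tokens=50,
)
print(resp["choices"][0]["text"])
```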
Social annotation tools help students read together
Now, a new study offers evidence supporting what […] has long observed: online social annotation helps students understand and construct knowledge around scholarly content, while at the same time building community.
https://www.insidehighered.com/news/2022/10/12/social-annotation-technology-helps-students-read-together
There is room for the use of social annotation tools in legal education, especially as more teaching resources move online. Tools like Hypothesis would give law students ways to highlight and annotate online materials and share those annotations with study groups, peers, and teachers.
Hypothesis is available in CALI Lawbooks, the online publishing platform for CALI members.
Canonical Launches Free Ubuntu Pro Subscriptions for Everyone – 9to5Linux
Canonical will provide Ubuntu Pro free of charge on up to five machines for personal and small-business use. It adds security coverage and support that extend the life of LTS releases. Details at https://9to5linux.com/canonical-launches-free-ubuntu-pro-subscriptions-for-everyone