Fine-Tuning AI Foundation Models with your Data

Will "Weak AI" Solve Some of the World's Toughest Problems?

[The image above was generated by Midjourney. The prompt I used to create it is listed at the end of this email.]

There is a fascinating dichotomy in artificial intelligence between general and specialized models. While Large Language Models (LLMs) like ChatGPT have been making waves with their ability to handle a wide array of tasks, there's a growing recognition of the unique value that specialized AI models bring. These models, often referred to as narrow or weak AI, are meticulously fine-tuned on specific data, enabling them to perform particular tasks with remarkable efficiency and accuracy.

Hugging Face is a premier destination for anyone seeking to explore and utilize large language models. Its platform hosts a comprehensive library of pre-trained models, from earlier releases like BERT, GPT-2, RoBERTa, and T5 to more recent ones like LLaMA, Falcon, and FLAN-2. These foundation models are trained on diverse datasets and can perform a wide array of tasks, from text generation to sentiment analysis, translation, and more.

Here’s an example of where model training comes into play. An LLM like GPT-4 might pass the medical boards because that material is part of, or at least well represented in, the corpus of data it was trained on. In contrast, a computer vision model may be able to distinguish different breeds of dogs from their pictures but not assess a patient’s condition from an X-ray, because it hasn’t been trained on medical imaging data.

One good example is a study published in Radiology, which found that an artificial intelligence (AI) tool, ChestLink version 2.6, demonstrated a 99.1% sensitivity rate in detecting abnormal chest X-rays, nearly 27% higher than traditional radiologist reports. The AI tool also showed the potential to reduce radiologists' workload by correctly identifying normal X-rays, although radiologists had a higher specificity rate for abnormal X-rays than the AI tool. The tool performs this well because the specialized model has been trained on a narrower, more relevant dataset, allowing it to make more accurate predictions. And while a radiologist may see tens of thousands of X-rays over a career, an AI can be trained on that same amount of data on day one, recall it almost instantly, and match patterns with a high probability of success.

Researchers at the University of Edinburgh say a trio of chemicals that target faulty cells linked to a range of age-related conditions was found using artificial intelligence, an approach hundreds of times cheaper than standard screening methods.

Findings suggest the drugs can safely remove defective cells – senescent cells – linked to conditions including cancer, Alzheimer’s disease, and declining eyesight and mobility. This is the exciting stuff that AI can bring, not just deep fakes of the pope in a designer jacket.

I collaborated with HumanSignal, a company that makes data labeling solutions, on this ebook, The Essential Guide to LLM Fine-Tuning. The book is a good overview of how one might begin training a foundation model. [This is not an advertisement, just something I worked on and think is educational. The guide is free but requires registration.]

Narrow AI Models for Business Use Cases

We will surely see several services backed by LLMs specially trained to complete specialized tasks or with domain expertise much deeper than that of Google Bard and ChatGPT. The biggest barrier for these general-use Large Language Models isn’t capability. It’s training data.

Training data is the specialized knowledge needed to educate the model on the context of a certain domain. "Educate" is more of an analogy than anything; these models aren't like humans that learn from experience. They take data and convert it into a format that allows them to assign probabilities, use complex algorithms to interact with inputs like chat messages or data sets, and adjust the weights behind those probabilities to come up with answers to our questions or render images from text inputs.
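The probability mechanics described above can be sketched in a few lines. This is a hypothetical, minimal illustration (the toy three-word vocabulary and the logit values are invented for the example), not how any production LLM is actually implemented:

```python
import math

# Toy vocabulary and raw scores (logits) a model might assign to each
# candidate next word, given some input text. In a real LLM these scores
# come from billions of learned weights; here they are made up.
vocab = ["x-ray", "chart", "banana"]
logits = [2.0, 1.0, -3.0]  # higher score = the model's weights favor it more

# Softmax converts raw scores into probabilities that sum to 1.
exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The model "answers" by sampling from, or simply picking the top of,
# this probability distribution.
best = vocab[probs.index(max(probs))]
print(best)        # "x-ray"
```

Training (and fine-tuning) is then the process of nudging the weights that produce those logits so that the probabilities better match the domain data.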

Finance

Bloomberg and Johns Hopkins University have presented BloombergGPT, a 50 billion parameter language model trained on a 700 billion token dataset that significantly outperforms current benchmark models on financial tasks. The model is trained on a wide range of financial data, augmented with 345 billion tokens from general-purpose datasets. Bloomberg researchers pioneered this mixed approach, combining finance data with general-purpose datasets to train a purpose-built model that achieves best-in-class results on financial benchmarks while maintaining competitive performance on general-purpose LLM benchmarks. So perhaps this is more a case of augmenting a general-use AI with specialized data, but the point remains the same: the idea is that it will have a high degree of capability in a particularly complex domain.

JP Morgan has developed COiN (Contract Intelligence), a program that uses a language model to review legal documents and extract important data points and clauses. The tool was trained to interpret commercial loan agreements, a task that traditionally required 360,000 hours of work by lawyers and loan officers each year.

OpenNyAI is an initiative aimed at advancing AI for Justice in India. They are developing AI public goods, such as models and APIs, and fostering a community of lawyers and technologists to transform the justice experience in India. The community builds datasets, models, and educational materials and advocates for better design and ethics for AI for Justice solutions.

OpenNyAI Labs is the part of the initiative that encourages makers to harness the power of AI, such as Large Language Models (LLMs), to build justice solutions. The lab provides resources, peer support, cloud credits, and more to accelerate the development of AI for Justice solutions.

OpenNyAI is an open source initiative, meaning its outputs, including the generated dataset, trained models, benchmarks, and other intellectual works, will be kept in the public domain for anyone to use freely. The initiative operates under the principles of being open, collaborative, transparent, and inclusive.

They have developed reference solutions like Jugalbandi, a free and open platform that combines the power of ChatGPT and Indian language translation models to power conversational AI solutions in any domain. Another reference solution is the AI-assisted Judgement Explorer, which showcases the capabilities of the open models developed thus far.

Video Production

Runway is a company specializing in applied AI research with a focus on enhancing creativity in art and entertainment. The company's primary objective is to develop multimodal AI systems. They have created a suite of tools, collectively known as "AI Magic Tools," which includes Gen-1 and Gen-2 generative AI models, text-to-image and image-to-image tools, frame interpolation, and Custom AI Training.

These tools are designed to enable new forms of creative expression and have potential applications across various industries. For instance, global brands and innovative enterprises could use these tools to generate unique and engaging content for their audiences.

For example, the Gen-1 and Gen-2 generative AI models could create commercials or promotional videos from simple text inputs. This could streamline the content creation process and allow for rapid prototyping of video concepts. Similarly, the text-to-image and image-to-image tools could be used to create custom graphics or transform existing images for use in promotional materials. For the Oscar-winning film "Everything Everywhere All at Once," VFX artist Evan Halleck used Runway's Green Screen Background Remover for one of the "moving rocks" scenes.

Tip of the Week: Organizing your AI Prompts

Crafting a good prompt is part of getting good results from your favorite chatbot. When I get a good result, I save the prompt for easy future access, and I often tweak those prompts as I reuse them to make them better. I call these my SuperPrompts (I gave away my SuperPrompts as a PDF in AI Prompt Engineering, Vocation or Required Business Skill?).

I like to keep them in Notion databases for later use, or I’ll use them to create new specialized prompts. You can also create a macro out of a prompt to recall it; see the tip of the week in Is AI Coming for your Job? to learn how.

Also, when you save a prompt, I suggest marking the fields that you will want to swap out in the future with a bracketed variable. In the prompt below, the variable is:

[my business for creating presentations]

In the future, I may replace that with personal finance and budgeting or some other need.

ChatGPT Prompt for Finding the Best AI Applications

I want to find the best AI applications that I can use today for [my business for creating presentations].

- Search the following websites: [ProductHunt.com](http://producthunt.com/), [AIScout.net](http://aiscout.net/), [Futurepedia.io](http://futurepedia.io/), [Insidr.ai](https://www.insidr.ai/ai-tools/), [Futuretools.io](https://www.futuretools.io/), [G2.com](http://G2.com), [Capterra.com](http://Capterra.com)
- Recommend the applications with the highest ratings and share their ratings in the list
- If no applications meet my criteria, then share those without ratings
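If you script any part of your workflow, the same bracketed convention is easy to automate with plain string replacement. A minimal sketch (the `fill_prompt` helper and the shortened template are hypothetical, for illustration only):

```python
# A saved "SuperPrompt" with a bracketed variable, shortened for the example.
template = ("I want to find the best AI applications that I can use today "
            "for [use case]. Recommend those with the highest ratings.")

def fill_prompt(template: str, variable: str, value: str) -> str:
    """Swap a [bracketed] placeholder for a concrete value."""
    return template.replace(f"[{variable}]", value)

# Reuse the same saved prompt for a different need.
prompt = fill_prompt(template, "use case", "personal finance and budgeting")
print(prompt)
```

The same idea works in Notion, a text expander, or any macro tool: keep the placeholder distinctive so a simple find-and-replace can never touch the wrong text.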

What I Read this Week

What I Listened to this Week

A.I. Tools I am Evaluating

  • Placid.app - Your creative vision at scale: Automate the production of your marketing visuals with our creative automation toolkit. Think of it as Canva powered by ChatGPT (though it doesn’t seem to use either of those products).

  • Levity - In just a few minutes, build, test, and connect AI that automates manual tasks.

  • Reword - Write articles that perform with an “editor that thinks”.

  • Runway’s AI Magic Tools - 30+ tools for real-time video editing, collaboration, and more. This is a lot to take in, so it might be some time before you can digest all the capabilities.

Midjourney Prompt for Newsletter Header Image

For every issue of the Artificially Intelligent Enterprise, I include the Midjourney prompt I used to create that edition's header image.

Environmental Portrait of a specific use case for a fine-tuned language model, set in a busy hospital. The image captures a doctor using the AI for medical research, a wide shot showing the integration of AI in healthcare. Photographed by Geoffrey Hinton, the photo highlights the life-saving potential of AI, stirring feelings of hope and admiration. The clinical white tones and the sharp focus on the doctor add a sense of urgency to the scene --s 1000 --ar 16:9
