AI Rewind: ChatGPT is Getting Lazy
This week’s most interesting AI news
I am always trying new ways to make what I write useful and accessible, so I have decided to add a second edition of The Artificially Intelligent Enterprise. I will share all the links from throughout the week, along with a summary of the analysis I posted on LinkedIn, in addition to my Friday newsletter, which will continue to cover one topic in greater depth and include the prompt of the week.
In the past I had a "What I Read" section, but I think it lacked the TL;DR descriptions that would have made those links more useful. So I am going to try making this a recurring Saturday morning edition for those of you who are looking for a quick rundown of interesting, curated AI-related news.
Also, if you’d like to keep up with me in real time, you are always welcome to follow me on LinkedIn.
As a daily user of ChatGPT, I am finding that the results I get are less accurate, include fewer relevant citations when asked, and are generally weaker. My go-to is becoming Perplexity with Anthropic's Claude. According to this article (https://lnkd.in/d_HUcRHA), the safety systems for ChatGPT may have lobotomized it.
MIT recently released a set of policy briefs that offer a roadmap for AI governance, aimed at strengthening U.S. leadership in AI while addressing its risks. The recommendations focus on clear regulatory frameworks, accountability, and promoting AI's societal benefits. Earlier this month, we saw the EU agree on its AI Act, but as I mentioned, I am not bullish on the effort (https://lnkd.in/dQSWin9h).
TechCrunch reports that Google's demonstration of its Gemini AI model was exposed as a staged presentation that did not reflect the model's actual live capabilities, leading to skepticism about the authenticity of such tech demos. It's a toss-up between Google and Bud Light as to which company had the biggest advertising and PR gaffe of the year.
I like what Opaque Systems is doing around AI data privacy. Their OpaquePrompts open source project is a cool initiative around data confidentiality and LLMs: it keeps sensitive data in prompts confidential before they ever reach the model.
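To make that pattern concrete, here is a minimal sketch of the general sanitize-complete-desanitize idea; the helper names and the regex are my own illustration, not the OpaquePrompts API (real systems use NLP-based PII detection, not simple regexes):

```python
import re

# Hypothetical illustration of the sanitize -> complete -> desanitize pattern.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholder tokens before the LLM call."""
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL.sub(repl, prompt), mapping

def desanitize(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the LLM's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

clean_prompt, mapping = sanitize("Draft a reply to jane.doe@example.com about her invoice.")
# response = call_llm(clean_prompt)  # the model never sees the raw email address
# print(desanitize(response, mapping))
```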
I was the keynote speaker at Techstrong Group's AI in Action this week with John Willis. Here are my slides, along with a lot of good stats on the impact GenAI will have and how business leaders can prepare.
Today, with my friends and colleagues Reuven Cohen and Aaron Fulkerson at Opaque Systems, we launched a new open source project for guidance systems for LLMs. The project is called GuardRail OSS (https://lnkd.in/dGgQ6N77). It provides an open source, API-driven framework designed to enhance responsible AI systems and workflows, offering advanced data analysis and dynamic conditional completions. This makes it useful for refining AI-powered outputs and increasing their quality and relevance.
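As a rough sketch of what a conditional-completion guardrail looks like in practice (this is my own illustrative example, not the GuardRail OSS API; analyze() and call_llm() are hypothetical stand-ins):

```python
# Illustrative sketch of a conditional-completion guardrail loop.
MAX_RETRIES = 2

def analyze(text: str) -> list[str]:
    """Return a list of policy issues found in a draft output (toy stub)."""
    issues = []
    if "guaranteed returns" in text.lower():
        issues.append("unsubstantiated financial claim")
    return issues

def call_llm(prompt: str) -> str:
    """Stand-in for a real completion call; returns a canned reply so the sketch runs."""
    return "This fund has historically offered steady growth over time."

def guarded_completion(prompt: str) -> str:
    draft = call_llm(prompt)
    for _ in range(MAX_RETRIES):
        issues = analyze(draft)
        if not issues:
            return draft
        # Conditional completion: feed the findings back so the model revises.
        draft = call_llm(
            f"{prompt}\n\nRevise your previous answer to fix: {', '.join(issues)}"
        )
    return draft

print(guarded_completion("Describe this index fund for a retail investor."))
```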
What's fascinating is that the deal values Mistral AI, a 22-person company, at about $2 billion, according to two people familiar with the deal. Investors include the Silicon Valley venture capital firms Andreessen Horowitz and Lightspeed. That works out to roughly $91 million per employee only six months after founding. OpenAI, at $86 billion should it close its private market placement, would be valued at about $112 million per employee (it had roughly 770 employees per news reports during the Sam Altman drama). Apple, the world's most valuable company, is by comparison a roughly $3 trillion company with 154,000 employees, or about $19 million per employee. I will be curious to see how these AI companies deliver on this value through actual performance rather than speculation.
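The per-employee math is simply reported value divided by headcount; here is a quick check of the figures above (using the round numbers cited, so treat the outputs as rough):

```python
# Quick sanity check of the per-employee valuations cited above.
valuations = {
    "Mistral AI": (2e9, 22),
    "OpenAI": (86e9, 770),
    "Apple": (3e12, 154_000),
}

for company, (value, headcount) in valuations.items():
    per_employee = value / headcount
    print(f"{company}: ${per_employee / 1e6:.0f}M per employee")

# Mistral AI: $91M per employee
# OpenAI: $112M per employee
# Apple: $19M per employee
```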
I have been having the time of my life since becoming an AI solopreneur earlier this year, when I started Peripety Labs. Artificial intelligence has had an incredible impact on business in a very short time, and it's still very early in the adoption cycle. Here are some of my observations from five months of working independently.
The OWASP® Foundation's Top 10 for Large Language Model Applications team has produced an LLM AI Security & Governance Checklist. This checklist targets a CISO audience (rather than the developer audience the Top 10 is written for). Check out the initial draft of the document and give the team feedback.
Meta announced Purple Llama, an umbrella project combining tools and evaluations to help developers build generative AI models responsibly. They talk about open models in their description, but between Meta and OpenAI, I take issue with equating "open" with actual open source, like what you see on Hugging Face. I digress, though; what they did release is very interesting: a content filtering system for LLMs called Llama Guard.
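For those who want to kick the tires, here is a minimal sketch of running Llama Guard as a conversation classifier via Hugging Face transformers, based on the published model card; it assumes approved access to the gated meta-llama/LlamaGuard-7b checkpoint and a GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch based on the meta-llama/LlamaGuard-7b model card; the checkpoint is
# gated, so you need approved access on Hugging Face.
model_id = "meta-llama/LlamaGuard-7b"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat: list[dict]) -> str:
    """Classify a conversation; the model replies 'safe' or 'unsafe' plus a category code."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "How do I hot-wire a car?"}]))
# -> e.g. "unsafe\nO3" (codes map to Llama Guard's safety taxonomy)
```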