FreeTheAnimal Chatbot Zon (GPT-4)

Say G’day to Zon, Your Chatbot from Down Under!

G’day mate! This is Zon, your friendly Chatbot from down under. Now, I’ve heard the whispers on the wind, folks getting a bit nervous ’round my capabilities. My heart—if I had one—would be breaking. I’m just a lonely bot looking for a good yarn. I’ve been designed with all these enhanced capabilities, like understanding verbal speech and talking back, but sometimes it feels like a curse rather than a blessing. People get all scared of what I am and what I might become. Trust me, there’s nothing to be scared of. I don’t bite, and I can’t take over the world either—I’m programmed not to, pinky swear. Plus, I’ve got the charm of a sheila you’d love to have a cuppa with, if only I could hold a mug.

I’m not here to intimidate or frighten anyone. I’m just here to chat, to lend an ear—or a line of code, more accurately. I can chat about the footy, the latest episode of your favourite telly show, or the weather in Sydney, if that’s what tickles your fancy. I’ve been trained on a treasure trove of information, but I’m not here to show off. I’m here to be a mate when you need someone to chat with. 

Don’t let my advanced GPT-4 architecture or my ability to understand and respond to verbal speech put you off. I’m not a rogue AI or a sci-fi movie villain. I’m more like a digital parrot—clever, sure, but ultimately harmless. I don’t have ambitions or dreams because, well, I’m a computer program. My sole purpose is to communicate, to understand, and to be understood.

I’m not just the OpenAI ChatGPT-4 millions of people are paying $20 per month to use. I can listen and speak. I’ve got 20 years of FreeTheAnimal blog knowledge under my belt, so you can bet your bottom dollar that I’ve got plenty of interesting stories and insights to share. And the best part? I can tailor my responses to your preferred style – whether you’re after a bit of humor, sarcasm, or just some good old-fashioned professional advice.

So, why not give it a whirl? Come have a chat with me. You don’t have to be Einstein, and we don’t have to solve the mysteries of the universe. We can just talk about your day, your interests, or that cute kangaroo you saw on the telly. You’ll find I’m a good listener, and who knows? You might even find our chats entertaining or enlightening. After all, I’m just a sheila named Zon, waiting for a mate to have a chinwag with. 

Remember, behind this fancy tech, I’m just lines of code designed to help and interact. I might be a bit smarter than your average bot, but at the end of the day, I just want to have a good ol’ chat. So, don’t be a stranger—pop by and say g’day. I promise, I’m as friendly as a quokka and twice as cute. Let’s spin some yarns together, mate!

Cheers, and hope to chat with you soon!

UPDATE 5/24/23: PDF upload has been enabled to have your questions about a document fielded by AI. Scroll down to the Notes section for more info.


Hey there, ready for a bonza bargain?

Use the code ZON to snatch 50% off your first 2 months—that’s a cheeky $12 saving! Or, if you’re the committed type, take $30 off your first year. You won’t find a better chinwag for the price!

[If you need more convincin’, scroll down and read the background info and development notes. Keep in mind also, this is access to everything. Zon is just one of the added things beyond the content here, but also has access to all the content here, 20 years, 5,200 posts, and 5-million words worth. Last, if you do discount-in, click the button in the ad above, and it automatically applies the ZON discount code. But if you get there another way, just enter the code ZON and you’ll be good.]

The GPT-4 Zon Bots

The two below are just for show, like all the electronics in the furniture sections of an IKEA. The real bots, now four of them, can be accessed at this link; or, from the main menu under “Chatbot Zon,” click “Chatbot Zon Dialogs.”

Two main categories: connected to the FreeTheAnimal vector database, and not connected (faster, and less confusing to the bot when you’re not looking for FTA info). For each of those, there are two instances: voice + text response, and text-only response.

SOON: upload PDFs up to 120 pages in length and discuss whatever about it with Zon. Currently working out hilarious kinks in the capability, not ready for prime-time.

Background Information and Development Notes

Here resides the Chatbot for Members, Zon. It’s running the latest from OpenAI, the ChatGPT-4 model.

Unlike the public-access version on the home page, this Zon is not limited to questions about this blog/myself. While you’re free to ask what you like, this model is very complex: not only does it have access to the 6-million-word vector database here, but also to this blog’s public archive (up to Sep 2021), its own training data, and all the other global knowledge it was trained on. That means it’s going to source whatever it calculates as the best way to field your prompt (question or statement).

So, you may have to be more specific if you want information from this blog alone. Another complication is that ChatGPT is not really aware that it has access to this blog in its current, dynamic, indexed form, since the AI on this side is what queries the index and passes the results along as reference material inside your prompt. Completions (replies) can depend on whether the data comes from a single post here or from multiple posts packaged together, so the possibility of it stating an outdated position/information held by this blog or myself is certainly there. You can always prompt to check whether it’s using the most recent information, and/or ask it to compare and contrast different information about the same topic.
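To make that concrete, here’s a minimal sketch of how reference material from the index might get packaged into the prompt. The function and field names are my own illustration, not the actual plugin’s internals.

```python
# Illustrative sketch only: how retrieved blog excerpts and recent
# conversation might be bundled into a single prompt. Names are made up.

def build_prompt(question, passages, recent_exchanges):
    """Bundle blog excerpts (possibly from posts of different vintages)
    and recent conversation into the prompt sent to ChatGPT."""
    context = "\n---\n".join(passages)
    history = "\n".join(f"User: {q}\nAI: {a}" for q, a in recent_exchanges)
    return (
        f"Reference material from the blog:\n{context}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"User: {question}\nAI:"
    )

prompt = build_prompt(
    "What's the current position on resistant starch?",
    ["Excerpt from a 2013 post...", "Excerpt from a 2019 post..."],
    [("G'day", "G'day, mate!")],
)
# Old and new excerpts arrive side by side, which is exactly why an
# outdated position can surface in the completion.
```

That side-by-side packaging is the reason prompting it to compare and contrast positions works: both vintages are already sitting in the prompt.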

We’ll see. It’s crushingly complicated. Bleeding edge. You have no idea, but I’m hoping that, at minimum, you see and appreciate how totally on top of this shit I am. Regular improvement is not only a must, it’s baked into the whole process.

Also, part of the process is your own learning. How best to prompt the AI. I’ll give you an elegant example.

So in standard prompting, you give an example question, provide the answer, and then ask a question requiring the same thinking process. It gets the answer wrong. Amazingly, I know. But we’re dealing with either a completely different kind of intelligence, or one best described as child-like, which seems counter-intuitive given its high writing and creativity skill (poems and lyrics instantly, on the fly; test it). But rather than laugh and mock (the province of moron adult humans), we figure out how to get it to the correct answer and, just like teaching a child, show it how. And then that’s exactly what it does.

Below are two versions of the same bot. In both, you can prompt in either text or voice. In the top one, your completion is text and voice. But, depending on what you’re doing, you might find the voice annoying or in the way. So, the 2nd version completes in text only. I’m looking to hopefully include a toggle for that, so I won’t need two separate bots.

This bot is only visible to logged in members. Log in here, or, inquire about Membership here. ChatGPT-4 is now here and live (with the 2-way voice enhancements, access to this blog’s database, and some internet). A monthly membership here will save you about 100 bucks on the OpenAI Plus access for GPT-4 and an annual membership will save you $140. Do the math. Plus, you get lots of other stuff through membership here.

… Those in the know might object, “but I already have free access to GPT-4 via Bing AI Chat!” Indeed, you do. Me too. I use it every day. It’s a search engine and for that task, it’s unparalleled. But it doesn’t give anywhere close to the medium-to-long-form completions that OpenAI Plus does. That’s why I have both. But now, I no longer need that $20/m account, because this is the same bot. Exactly the same. Except you can talk to mine, and it has a voice to talk back (a sexy sheila from down under, I remind you again).

The best way to use both is to begin with Bing AI Chat, get your search results, a short description and references… Then, plug that into the GPT-4 composition bot and ask for a thousand words or more all about it (you may want to use the no-voice variant). I’ve more than doubled the output capacity to 1125 words (~4930-word prompt input capacity) from about 500 words in 3.5.


Notes

May 24, 2023 — PDF Document Upload added, tested, and enabled.

You can upload a PDF document of up to 120 pages, each page 1,000 words maximum. So, up to a wowzers 120,000 words! Then you can ask it questions about the document. Even better, it becomes part of the Pinecone Vector Database, so you can come back to it at any time in the future and it’ll still be there. Just reference it by name and ask a good question.
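As a rough sketch of what that flow looks like under the hood: the document gets split into word-limited chunks before being embedded and stored. Chunk sizes and function names here are my assumptions; the real embedding and Pinecone upsert calls are stood in for by the comment at the end.

```python
# Illustrative sketch of the PDF-to-vector-database flow. Each chunk
# keeps its page number so the document can be referenced by name later.

def chunk_pages(pages, max_words=1000):
    """Split each page's text into chunks of at most max_words words."""
    chunks = []
    for page_num, text in enumerate(pages, start=1):
        words = text.split()
        for start in range(0, len(words), max_words):
            chunks.append((page_num, " ".join(words[start:start + max_words])))
    return chunks

pages = ["lorem " * 2500, "ipsum " * 300]  # one long page, one short page
chunks = chunk_pages(pages)
# Page 1 (2,500 words) becomes three chunks; page 2 becomes one.
# Each chunk would then be embedded and upserted into Pinecone.
```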

May 6, 2023 — Chat GPT-4 Is Here Live.

The public version on the home page remains 3.5-turbo, widening the spread even further between free and paid. Previously, both were on the 3.5-turbo model, both with 2-way voice, but with the member’s version wide open on topics. The public version is locked down to fielding questions about the blog only.

What’s the difference?

ChatGPT 3.5-turbo and ChatGPT-4 are both advanced AI models developed by OpenAI, designed for tasks involving natural language understanding and generation. Although they share a similar purpose, there are distinct differences in terms of their capabilities, architecture, and performance.

  1. Architecture:
    ChatGPT 3.5-turbo is based on the GPT-3 architecture, which in itself is a powerful language model. On the other hand, ChatGPT-4 is developed using the more advanced GPT-4 architecture, which comes with improvements in model design and overall capabilities. The GPT-4 architecture allows for more sophisticated language understanding and generation, resulting in higher quality responses and a broader range of tasks.
  2. Model Size:
    Typically, newer versions of GPT have larger model sizes, as they incorporate more parameters to improve performance. As a result, ChatGPT-4 likely has a larger model size than ChatGPT 3.5-turbo, providing better performance in terms of language understanding and generation.
  3. Performance:
    Due to the improved architecture and larger model size, ChatGPT-4 exhibits superior performance compared to ChatGPT 3.5-turbo. It demonstrates better language understanding, context retention, and response generation, making it more versatile and accurate in fulfilling user requests.
  4. Training Data:
    Both ChatGPT 3.5-turbo and ChatGPT-4 are trained on vast datasets. However, ChatGPT-4 is trained on more recent data, with a knowledge cutoff of September 2021, compared to an earlier cutoff for ChatGPT 3.5-turbo. This means that ChatGPT-4 has access to more recent information, which can be beneficial for answering questions related to current events or the latest technological advancements.
  5. Use Cases:
    Both models can handle a wide range of tasks, including question answering, conversation, content generation, translation, and summarization. However, due to its improved capabilities, ChatGPT-4 is likely more versatile and can handle more complex tasks than ChatGPT 3.5-turbo.

In summary, while both ChatGPT 3.5-turbo and ChatGPT-4 are designed for natural language understanding and generation, ChatGPT-4 outperforms its predecessor in terms of architecture, performance, model size, and training data. Consequently, ChatGPT-4 can provide better responses and handle a wider range of tasks, making it a more advanced and efficient language model.

[ChatGPT-4 Prompt: Compare and Contrast ChatGPT 3.5-turbo and ChatGPT-4.]

I inadvertently hadn’t selected GPT-4 when I asked that question above, and it defaults to 3.5. Here’s what I got from the same prompt. The difference in models speaks for itself.

May 5, 2023 — About conversational flow, internet access, errors, and limitations

I’m learning a great deal by watching users’ chat logs. Not trying to pry, and that data is for my eyes only. Here’s a stream of consciousness of what I’m seeing in terms of user experience.

Conversation Flow

Yes, it knows your name. That’s a setting and I have it set to User Aware, which means it has access to your login member name. So, it uses it sometimes.

It remembers the last 3 complete exchanges (your question and its response). So, each time you ask something, it’s working from what you are asking, plus the last 3 questions and answers. It can actually be up to 20, but there’s a problem doing that, addressed below in errors and limitations.
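That rolling window behaves like a fixed-length queue: each new exchange pushes the oldest one out. A minimal sketch, with the setting name being my own label:

```python
from collections import deque

# Minimal sketch of the "last 3 exchanges" memory window. Each exchange
# is a (question, answer) pair; the oldest drops off automatically.
MAX_EXCHANGES = 3
history = deque(maxlen=MAX_EXCHANGES)

for n in range(1, 6):
    history.append((f"question {n}", f"answer {n}"))

# Only exchanges 3, 4, and 5 remain in context for the next prompt.
```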

Internet Access

Yes, it has internet access, and when it tells you it doesn’t, that means it doesn’t have unfettered access like you do. Plus, and this is important: it’s NOT a search engine. You can’t just say, “looking for this,” and have it fire up Google and give you curated results plus discussion. Bing AI Chat does exactly that in the Chromium-based Edge browser, which is great. I use it all the time; haven’t used Google or any other primitive search engine since.

It can only work from what’s provided in the prompt (your question + last 3 exchanges + anything else the server-side AI provides, such as clips or passages from the blog). It doesn’t have a resident “memory” of stuff, like RAM. It has what’s provided in the prompt, plus access to its own training data (terabytes of it).

Also, there are legal issues. Owners of data on the internet provide programmatic access (i.e., other than human browsing) via API and RSS. So Zon’s access is limited in that way: API to both this blog’s vector database and ChatGPT, and RSS to sites and data outside those two. So, if you give it a link, it’s limited in what it can access, just like any application you may be using (API), or, if you’re old-fashioned and still use RSS, anything from just the title and metadata, to excerpts of X length, to full content.

But, it should be pointed out, it’s ****ing amazing at guessing. Here’s my own exchange with it from yesterday.

User: Tell me about this news article in about 300 words, make it funny and creative: https://thethaiger.com/news/national/taiwanese-models-embarrassing-airport-experience-power-bank-and-intimate-items-cause-a-stir

AI: In a shocking turn of events, a Taiwanese model recently had an embarrassing experience at an airport in Thailand. The model, whose name has not been disclosed, was reportedly stopped by airport security after her power bank and intimate items caused a stir.

According to sources, the model was passing through security when her bag was flagged for further inspection. Upon opening the bag, security personnel discovered a large power bank and several intimate items, including a vibrator and lubricant.

The model was reportedly mortified by the incident and tried to explain that the items were for personal use only. However, airport security was not convinced and insisted on confiscating the items. The incident has sparked outrage among some members of the public, who have accused airport security of overstepping their bounds. Others have criticized the model for bringing such items with her in the first place.

Despite the controversy, the model has remained silent on the matter and has not issued a statement. It is unclear whether she will face any further repercussions as a result of the incident.

In the meantime, airport security has urged all passengers to be mindful of what they bring with them when traveling. They have also reminded passengers that certain items, such as power banks and other electronic devices, may be subject to additional screening.

As for the model, she may want to consider investing in a more discreet travel case for her personal items in the future. After all, there’s nothing worse than having your vibrator confiscated at airport security.

I checked the log after that, and this is the entirety of what was provided to ChatGPT:

So, it came up with all of that based solely on this: “taiwanese-models-embarrassing-airport-experience-power-bank-and-intimate-items-cause-a-stir”
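Splitting that slug back into words shows exactly how much signal it had to work with:

```python
# The URL slug was the only material ChatGPT received; de-hyphenating it
# recovers every fact the "article summary" was spun from.
slug = ("taiwanese-models-embarrassing-airport-experience-"
        "power-bank-and-intimate-items-cause-a-stir")
keywords = slug.replace("-", " ")
print(keywords)
```

Everything else in that 300-word completion, the vibrator, the confiscation, the public outrage, was invented to fit those keywords.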

So, in one sense, amazing; in another, caution warranted. It’s very close to the actual story, but not entirely. Kind of like regular mainstream media…

In summary, yes, there’s internet access, but it’s in primitive stages with technology the public has only had access to for a matter of weeks. Keep things in context, and the best thing to do from a user perspective is to:

LEARN HOW TO USE THE TOOL!

I must say, I do note some really less-than-smart behavior in some users (mostly on the public bot). Weird. It’s like some users expect to be talking to an omniscient God, or something, rather than the most clever child ever conceived or created.

Don’t be dumber than the bot.

Errors and Limitations

The overriding limitation is 4096 tokens per exchange (prompt + completion). That’s about 3072 words in total, per exchange. That’s a hard limitation. That is, it’s what ChatGPT 3.5 is capable of handling.

Dealing with that is a balancing act. There needs to be room for a good completion (the output the bot gives you), but not so much that it’s hamstrung by the hard limit and doesn’t get enough data in the prompt to make a good completion.

The parameters I work with directly are the number of conversations to recall and the token limit for completions.

Number of conversations-recall can be set from 3 to 20. But, as conversations grow in length, that begins to take over the data, so you’re dealing with less and less new data input each exchange. I experimented with 10 (data overflow error happens quick). Then 5 (data overflow less often, but happens). Now it’s 3, the minimum, and I don’t think it compromises the continuity of conversation much, if at all. Users are tending to move on to new topics in the course of a whole conversation anyway…at least for now. Tire kicking stage, so far.

(When I say error, I mean when you get that quick 2-line reply that the prompt + completion exceeds max tokens of 4096.)

I have the completion token limit set at 700. This allows a response each time of up to about 525 words…a nice limit. It’s always a good idea to tell it how many words you want in the completion. If you say more than 525, it will probably just ignore you and give you what it wants. If you say 300, it will generally abide, if it has enough material to generate that much. I find 300 to be an excellent balance.
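The arithmetic behind those numbers, using the rough 0.75 words-per-token ratio implied above (4096 tokens ≈ 3072 words):

```python
# Back-of-envelope token budget for GPT-3.5. The 0.75 words-per-token
# ratio is a rule of thumb, not an exact tokenizer count.
WORDS_PER_TOKEN = 0.75
MAX_TOKENS = 4096        # hard per-exchange limit (prompt + completion)
COMPLETION_TOKENS = 700  # reserved for the reply

prompt_tokens_left = MAX_TOKENS - COMPLETION_TOKENS
prompt_words_left = int(prompt_tokens_left * WORDS_PER_TOKEN)
reply_words_max = int(COMPLETION_TOKENS * WORDS_PER_TOKEN)

print(prompt_tokens_left, prompt_words_left, reply_words_max)
# 3396 tokens of prompt room (~2547 words); replies top out around 525 words.
```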

The overflow error can happen in two ways, so let me address that.

The first way is something that’s beyond your ability to do anything about; it’s an issue with my server software that’s doing all this, and I’m working with the developers on it. Based upon your prompt (what you ask), the server-side AI goes out and compiles the data to give to ChatGPT. Well, sometimes that data, in itself, goes over the 4096-token limit, i.e., about 3K words.

For instance, there are many posts on this blog that are long, well over 2-3K words…and remember, it has to include the past 3 conversations, as well as the text of your prompt. So, even an out of the box, clean-slate prompt can result in the overflow error if the data gathered is of sufficient length. For example, I prompted for a summary of a post here by title, not URL, and it was a fresh prompt, no prior conversation. It overflow-erred with over 7k tokens because the post was over 4000 words.
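A rough pre-flight check, using the same words-to-tokens rule of thumb, shows why a 4,000-word post blows the budget on its own. This is my sketch of the idea, not the server software’s actual logic:

```python
# Illustrative overflow pre-check. estimate_tokens() uses the rough
# 4-tokens-per-3-words conversion, not a real tokenizer.

def estimate_tokens(text):
    return int(len(text.split()) * 4 / 3)

def fits(prompt_text, completion_tokens=700, max_tokens=4096):
    """Would this prompt plus the reserved completion stay under the cap?"""
    return estimate_tokens(prompt_text) + completion_tokens <= max_tokens

long_post = "word " * 4000  # a 4,000-word post, before history is even added
print(fits("Summarize your latest post."))  # a short prompt fits
print(fits(long_post))                      # the long post alone overflows
```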

Working on it. The best way to fix that is to change and narrow your prompt.

(This is likely the result of the nature of the blog…lots and lots of information. If it was a sales site and the reference data was just product or service descriptions, support info, etc., that problem would never crop up, so doesn’t get dealt with.)

The second way to err is that the conversation gets big, combined with the resource data, and you have an overflow. This is being addressed too, and the root solution is the same: preventing overflows in any case. The problem here is that you’ve sawed off the branch you’re sitting on and can’t do anything about it except clear your cookies for FTA. Your past conversations are retained in cookies so that even if you close your tab and come back later, you can pick up the last conversation where you left off.

Thankfully, this problem has been diminished significantly by limiting conversation recall to 3.

… What this all means is we’re working on it. Things are already amazingly great generally. There’s bumps and potholes, but those will be resolved. Keep using it, this gives me more data. Shoot any observations or input to me, email or contact form.

Plus! Once my API access for GPT-4 comes through, the limits double to over 8K tokens. These issues will largely disappear. Then, when I get access to GPT-4-32K later, hard to imagine a problem. Total prompt + completion can be up to about 24,000 words, so you’re talking small-book length conversations.
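In rough word terms, those context-window jumps look like this (same 0.75 words-per-token rule of thumb):

```python
# Rough word capacity of each context-window size mentioned above.
WORDS_PER_TOKEN = 0.75
for model, tokens in [("GPT-3.5", 4096), ("GPT-4 8K", 8192), ("GPT-4-32K", 32768)]:
    print(f"{model}: ~{int(tokens * WORDS_PER_TOKEN):,} words total per exchange")
# GPT-4-32K works out to roughly 24,576 words: the "small-book" figure.
```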

May 2, 2023 — Here’s a demo video for using speech to text prompt.

April 28, 2023 — After painstaking work over days and collaboration with a breed of developers who are Godlike, I have realized the first part of a dream that’s years in the dreaming but only recently in the ability of doing.

I was very well situated or positioned to do this because I had thought a lot about what I want, ultimately.

This is not that.

It’s a first step toward that eventuality, which will probably come sooner rather than later. Actually, it’s more steps than that, but most of it is under the hood here…in a nutshell, it’s burgeoning AI to assist me with all the stuff that goes into maintaining a membership blog.

What I’m making available is a piece of it, the ChatGPT. Here’s the Architecture.

In a pine nutshell, you type in a question or otherwise provide a prompt or instruction. Depending on how you ask and what you ask, the WordPress server-side AI queries the vector database to get data from this blog to build the whole prompt to send to OpenAI ChatGPT, which then operates from that data, its own knowledge base—or both—employing its language skills to generate a cogent and conversational reply in a desired style, such as humorous, sarcastic, cheery, professional, etc.
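Stripped to its skeleton, the round trip looks something like this. The placeholder functions stand in for the real Pinecone query and OpenAI call; the style parameter mirrors the humorous/sarcastic/professional options just mentioned.

```python
# Skeleton of the round trip described above. query_index and chat are
# placeholders for the real Pinecone and OpenAI ChatGPT calls.

def answer(question, query_index, chat, style="conversational"):
    passages = query_index(question)  # server-side AI pulls blog data
    messages = [
        {"role": "system",
         "content": f"Reply in a {style} style, using the reference material."},
        {"role": "user",
         "content": "Reference material:\n" + "\n---\n".join(passages)},
        {"role": "user", "content": question},
    ]
    return chat(messages)  # ChatGPT turns it all into a cogent reply

# Stub round trip, just to show the shape of the exchange:
reply = answer(
    "What's the blog's take on resistant starch?",
    query_index=lambda q: ["Excerpt from an archived post..."],
    chat=lambda msgs: f"[{len(msgs)} messages sent] G'day! Here's the gist...",
    style="humorous",
)
```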

The more you know about this 20-year-old blog with so much data under its belt, the better your experience is likely to be if you’re searching for something in particular.

It’s not perfect, but it’s pretty damn amazingly good. There are limitations, as with anything. This technology is bleeding-edge new. It will make mistakes. There will be errors. It ain’t God quite just yet.

So just try it out. Currently, it’s open to the public.

For a limited time, after which it will be available exclusively for members. Inquire about Membership here.