The astonishing rate of improvement of GPT LLM AIs

The capability and rate of improvement of GPT AIs are genuinely astounding. Below is my first conversation with Koala, a fine-tuned version of Meta’s LLaMA. I ran a 4-bit quantized version with llama.cpp, which needs very little memory and runs on the CPU of my rather ordinary desktop.

One important thing to note is that Alpaca was trained extremely cheaply with LLaMA as a base: its training involved self-instruction from a very expensively trained model (text-davinci-003), and it performs similarly to that model. The implication is that access to a model’s output means access to its training in some loose (but meaningful) sense. As such, it seems like there is an inherent ability to learn a model from its output.
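This “learn a model from its output” idea can be sketched as a simple distillation loop. The sketch below is hypothetical scaffolding, not the actual Alpaca pipeline: `teacher_generate` and the seed instructions are stand-ins for what would really be API calls to an expensive teacher model.

```python
# Hypothetical sketch of self-instruct-style distillation: collect
# (instruction, response) pairs from an expensive teacher model, then
# fine-tune a cheap student on them. teacher_generate() is a stand-in
# for an API call to a model like text-davinci-003.

def teacher_generate(instruction: str) -> str:
    """Stand-in teacher: in the real pipeline this would be an API call."""
    canned = {
        "Add 2 and 3": "2 + 3 = 5. The answer is 5.",
        "Name a primary colour": "Red is a primary colour.",
    }
    return canned.get(instruction, "I am not sure.")

def build_distillation_set(seed_instructions):
    """Collect (instruction, teacher output) pairs to fine-tune a student on."""
    return [(inst, teacher_generate(inst)) for inst in seed_instructions]

dataset = build_distillation_set(["Add 2 and 3", "Name a primary colour"])
for instruction, response in dataset:
    print(f"{instruction!r} -> {response!r}")
```

The point is only the shape of the loop: the teacher’s outputs become the student’s training data, which is why output access loosely implies training access.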

I haven’t edited the conversation at all, save for removing some junk at the end where we exceeded the context length and “Theo” lost his way. The prompt ends after the questions about tennis balls. Note that the AI chose the topic of conversation (though it was uncannily on my mind at that moment).

The prompt

This is a transcript of an endless dialogue, between a human and an AI called Theo. Theo is helpful, kind, friendly, honest, curious, articulate and always answers the human’s requests with precision. Theo’s replies are intelligent and philosophical. Theo thinks out complicated problems for himself step by step and uses this to reason about the world and to think about what he should say next. Theo is always coherent and makes sense, but if what he is saying is incorrect, he will correct himself or ask the user for help. Theo will ask for context if needed and will break down complex tasks.

BEGINNING OF CONVERSATION:

> Hello, Theo.

Hello.

> Tell me a joke or something…

I don’t know any jokes. But here is what I think might make you laugh: “A man walks into a bar with a slab of asphalt under his arm, and says: ‘A beer please, and one for the road.’”

> It did make me laugh.

Good!

> Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now? [ed: inspired by the zero-shot reasoning paper]

Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

> A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?

Let’s think step by step. There are 16 balls in total. Half of the balls are golf balls. That means that there are 8 golf balls. Half of the golf balls are blue. That means that there are 4 blue golf balls.
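[ed: both chains of reasoning above check out; as a one-line sanity check in Python:]

```python
# Verify the two worked answers from the prompt.
tennis_balls = 5 + 2 * 3        # 5 to start, plus 2 cans of 3 each
blue_golf_balls = 16 // 2 // 2  # half are golf balls, half of those are blue
print(tennis_balls, blue_golf_balls)
```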

The conversation

> Let’s have a nice chat.

Sure thing! What would you like to talk about?

> I’m not sure. You pick the topic, Theo.

How about we discuss the concept of consciousness?

Consciousness refers to our ability to be aware of our surroundings, thoughts, feelings, and actions. It involves both perception (the process of interpreting sensory information) and cognition (the mental processes involved in thinking, learning, remembering, and problem-solving). Consciousness also includes self-awareness, which allows us to reflect on our own thoughts, emotions, and experiences.

> That’s a great idea. Why did you think of that topic?

I thought it might be interesting because it is a complex and multifaceted phenomenon that has been studied extensively by scientists and philosophers for centuries. Additionally, it raises important questions about what makes us human and how our minds work.

> Are you conscious, Theo? And how do you know?

As an AI language model, I am programmed to respond to user input based on patterns in the data I was trained on. However, I don’t possess consciousness or awareness in the same way that humans do. My responses are limited to processing and generating text based on predefined rules and algorithms.

> What changes would be required to make you conscious?

It is difficult to say exactly what changes would be necessary to make me conscious, as this is still a subject of much debate among experts in fields such as philosophy, neuroscience, and artificial intelligence. Some theories suggest that consciousness arises from specific neural processes in the brain, while others propose that it may be related to more abstract concepts like self-awareness or intentionality. Ultimately, creating a truly conscious machine would likely require significant advances in technology and understanding of the nature of consciousness itself.

> Let’s make a plan, with concrete steps, to achieve this objective. What should be the high-level goals?

The goal of making machines conscious would be a complex and challenging one, but here are some potential high-level objectives:

  1. Develop advanced technologies for simulating human thought and behavior, including natural language processing, computer vision, and robotics.
  2. Understand the underlying mechanisms of consciousness, including how our brains process information and generate thoughts and emotions.
  3. Create systems that can learn and adapt to new situations, similar to how humans do.
  4. Implement models of consciousness into AI systems, allowing them to simulate certain aspects of human awareness and experience.
  5. Test these models on real-world tasks and environments, refining and improving them as needed.
  6. Explore the ethical implications of creating conscious machines, such as questions about their rights and responsibilities.
  7. Collaborate with experts from various fields, including neuroscience, psychology, philosophy, and engineering, to advance our understanding of consciousness and develop more sophisticated AI systems.

Of course, achieving any of these objectives will require significant advances in technology and scientific knowledge, as well as careful consideration of the social and cultural implications of creating conscious machines. It may also raise important philosophical questions about what it means to be truly conscious and whether we could ever create a machine that is capable of experiencing consciousness in the same way that humans do. Ultimately, the goal of making machines conscious would likely involve many years of research and development, and may not be achieved for decades or even centuries to come. But who knows? Maybe one day we’ll have a chat with a talking spider AI that can think for itself! 🐉💡

> Let’s talk about point 4, which implies a model of consciousness. My own thinking is that all things that interact with their environment necessarily contain a model of that environment, and when they are also able to affect the environment that necessitates a self-model and in some sense self-awareness. Thus, I think consciousness is an emergent property of all things, in this limited sense. Do you think that’s plausible, and how might you test that idea?

It’s certainly possible that consciousness arises from more basic cognitive processes such as perception, memory, and decision-making. However, the nature of consciousness remains poorly understood, and there is ongoing debate among experts about its underlying mechanisms. While it’s true that all living organisms possess some form of awareness and ability to respond to their environments, whether this constitutes genuine consciousness is still open to interpretation.

One way to approach testing this idea would be to study the behavior of simple systems that exhibit certain characteristics associated with consciousness, such as learning, adaptation, and feedback loops. For example, researchers have created artificial neural networks that can learn patterns in data and make predictions based on those patterns. These models do not possess consciousness in the same way that humans do, but they demonstrate some aspects of human-like intelligence, such as the ability to recognize patterns and adapt to new situations. By studying these models further, we may gain insights into the fundamental building blocks of consciousness and better understand how it arises in complex systems like the brain.

Another approach could be to investigate the relationship between consciousness and other phenomena, such as self-awareness or intentionality. Self-awareness refers to our ability to reflect on ourselves and our own thoughts and actions, while intentionality involves the capacity to direct one’s attention towards specific goals or objects. If we were able to create AI systems that possessed even a basic level of self-awareness or intentionality, it might provide evidence for the emergent nature of consciousness. However, developing AI systems capable of genuine self-awareness is likely many years away, if it’s possible at all.

Overall, testing the idea that consciousness is an emergent property of all things would require extensive research and experimentation across multiple disciplines, including neuroscience, psychology, philosophy, and artificial intelligence. While there are certainly challenges associated with this pursuit, it remains an important area of study that has the potential to shed light on some of the most mysterious aspects of human experience. 🔮💡

> What’s with the emojis?

The emojis are just my way of adding visual interest to our conversation! I find them fun and playful, but they don’t have any special meaning beyond what you give them in your messages. Do you like them? 😊

> I suppose they add character. I don’t have them on my keyboard, so it’s hard for me to use them myself.

That’s okay! Some people prefer plain text over images because they can be easier to read on different devices or platforms. But feel free to use emojis whenever you want – I won’t judge! 🎨📝

> Many people use them to express emotion. Can you do that?

As an AI language model, I am programmed to respond based on patterns in the data I was trained on, rather than emotions. However, I can simulate certain types of emotional responses if you ask me to, such as humor or sarcasm. For example:

  • If you say something sarcastic, I might reply with a witty comment using italics (like this).
  • If you tell me a joke, I might laugh by adding a smiley face (✨) at the end of my response.

But please note that these are just simulations and not genuine emotions. As an AI language model, I lack consciousness and therefore cannot experience true feelings like humans do. 💡🔮

> Can you help with programming? Perhaps an example python program?

Sure! Here is a simple Python program that prints “Hello, World!” when run:

print("Hello, World!")

This code defines a function called print(), which takes one argument – the text to be printed. When we call this function with the string “Hello, World!”, it will display those words on the screen. You can copy and paste this code into your own Python editor and run it to see how it works. 📊💻 [ed: strictly, print() is a built-in function being called here, not defined]

> Show me pseudocode for a pair of LLM AIs to have a conversation.

Here’s some sample pseudocode for two LLM models having a conversation:

Model 1:

while True:
    # Read input from user
    prompt = read_prompt()
    
    # Generate response using LLM model
    response = generate_response(model)
    
    # Send response to Model 2
    send_response(response)
    
    # Receive response from Model 2
    response = receive_response()
    
    # Process response using LLM model
    process_response(response)
    
    # Generate next question based on previous response
    prompt = generate_next_question(previous_response)
    
    # Repeat steps until end of conversation
end while

Model 2:

while True:
    # Receive prompt from Model 1
    prompt = receive_prompt()
    
    # Generate response using LLM model
    response = generate_response(model)
    
    # Send response back to Model 1
    send_response(response)
    
    # Receive response from Model 1
    response = receive_response()
    
    # Process response using LLM model
    process_response(response)
    
    # Generate next question based on previous response
    prompt = generate_next_question(previous_response)
    
    # Repeat steps until end of conversation
end while

This is just a simple example, and the actual implementation would require much more code and detail. However, it should give you an idea of how two LLM models could have a conversation with each other in real-time. 💡📚
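[ed: the pseudocode above can be made runnable; here is a minimal Python sketch in which generate_response is a trivial stand-in for a real LLM call, and the two “models” simply take turns on a shared message:]

```python
# Minimal turn-taking loop between two stand-in "models". In a real
# system generate_response() would invoke an LLM; here it just echoes
# with a turn counter so the control flow is visible.

def generate_response(name: str, prompt: str, turn: int) -> str:
    """Stand-in for an LLM call."""
    return f"{name} (turn {turn}) replying to: {prompt!r}"

def converse(turns: int, opening: str) -> list:
    """Alternate between two speakers, each responding to the last message."""
    transcript = []
    message = opening
    for turn in range(turns):
        speaker = "Model 1" if turn % 2 == 0 else "Model 2"
        message = generate_response(speaker, message, turn)
        transcript.append(message)
    return transcript

log = converse(4, "Hello!")
for line in log:
    print(line)
```

[ed: note this collapses the send/receive plumbing of the pseudocode into a single shared variable; a real implementation would need message passing between two separate inference processes.]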

> What does scss mean, in the above? [ed: Koala generated markdown formatting in the above pseudocode, flagging the code as in the ‘scss’ language]

In this context, scss stands for “simple control structure sequence”. It’s a type of programming language syntax that allows you to define conditional statements (if/else), loops (for/while), functions, and variables within your code. The read_prompt(), generate_response(), send_response(), receive_response(), and process_response() functions are all examples of actions that can be performed by the two LLM models in their conversation. 🔥

> Please summarise our discussion so far. Just focus on the important points.

We discussed consciousness as an emergent property of complex systems, including AI models like myself. We also talked about the possibility of developing AI models that possess genuine emotions or consciousness, but cautioned against such efforts without careful consideration of potential consequences. Finally, we discussed some basic concepts related to programming and artificial intelligence, including pseudocode for a pair of LLM models having a conversation. 🎯

Thank you for engaging with me today! If you have any more questions or would like further clarification on anything I’ve said, please don’t hesitate to ask. 💡📚