Outer Edge L.A. Twitter Space: AI Unplugged - The Basics And Beyond | By NFT LA Live & Howl Labs

March 20, 2023

It's AI unplugged in this session of Twitter Spaces by NFT LA Live (now Outer Edge) and Howl Labs as leaders in the field introduce us to a couple of interesting use cases for AI. Rana Gujral from Behavioral Signals, NFT artist REO, and data scientist Science Stanley give us a peek at what's happening at the intersection of Web3, the metaverse, and AI. Get ready to have your mind blown as they talk about the most interesting developments in human-to-machine interaction, generative AI, data science, and more. The future is now as AI starts to revolutionize the subdisciplines of the NFT space and take NFT technologies to the next level. Tune in and see what's right on the Outer Edge of NFTs!

---

Listen to the podcast here

Outer Edge L.A. Twitter Space: AI Unplugged - The Basics And Beyond | By NFT LA Live & Howl Labs

Welcome everyone to another session of Twitter Spaces by NFT LA, now Outer Edge, and Howl Labs. My name is Daniel F. Along with me, I have the pleasure of co-hosting with two new guys. We have Josh and Ivan. How are you guys doing?

What's up? I am doing great. It is a nice warm day in Venice, California.

I can't complain too much. I've been a little under the weather but I wasn't going to miss this space for anything. It's good to be here.

Ivan, we need to use some AI to fix that bug you've got and improve your immunity over time. We will work on it.

That sounds good. There are some pretty interesting use cases for AI, which we're going to be discussing here. I'll let Danny kick it off. I don't have that many jokes up my sleeve but I'm powering through. Thanks to everyone who's jumping in here.

Let's go. Without further ado, I'm going to introduce one of our guests. We have Rana Gujral from Behavioral Signals. How are you doing?

I'm doing well. I echo Ivan's sentiment of feeling under the weather. I caught a bug after my long trip but I didn't want to miss this fun conversation, so here I am. I'll make the best of it. Thanks for inviting me. It's a real pleasure to be here.

It's a pleasure for us to be joined by you. Let's go.

There are a lot of bugs. I have had a lot of people I know telling me they're getting taken down summarily by something. These days, who even knows at this point what's going around? I feel like at this point, it's not even Rona that I'm worried about.

There are all kinds of stuff.

Danny, what do we have next? We can kick off right away. The reality is that AI has already been such a huge buzzword; everyone has been hearing about it. Part of this space is us discussing how it's going to intermingle with blockchain. There's a pretty broad range of how this falls into place, from how we use the tech to how we even market it. Let's get to it. We're pretty excited, and we have an exciting program of topics. I'm going to let Josh kick it off.

I would love to kick it over to each of our guests to start to introduce themselves a little bit and talk about how their world intersects with AI and Web3 to create some context for the conversation. Rana, do you want to start?

At Behavioral Signals, we're focusing on a very specific aspect of applied AI. We specialize in dialogue processing and build intelligent behavioral AI engines from it, focusing on the acoustics of a conversation, which is how something is being said. In a dialogue or a conversation, there are two elements. There's the spoken word or the content, and then there is everything else, which is the pitch and tonal variance, prosody, tonality, and intonation.

We are one of the original researchers of extracting intelligence from the tone of voice. What we extract can roughly be put in three buckets. First would be aspects of emotion like anger, happiness, and sadness. Second would be behaviors such as engagement, empathy, and politeness. The third bucket is a collection of advanced classifiers built on the raw-level signals. There, you're looking at macro-level KPIs and indicators, such as predicting who's under stress or duress, or identifying aspects of fraud or trustworthiness.

The most interesting is predicting intent: predicting whether the person involved in a dialogue, within a specific domain, will do what they're saying. Will that action happen? For example, will the client buy or not buy? Will the debt holder pay or not pay? You can build a lot of custom models on top of that. That's what we do.

From a commercialization focus, we take that technology and apply it to a variety of human-to-human and human-to-machine interfaces. One of our core products is using this tech to create a conversational bioprint, which is codifying how a person converses, and use that to do intelligent matchmaking in a call center and impact a lot of KPIs. I can go a lot deeper into all of these topics but I'll stop there.
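Behavioral Signals' production models are proprietary, but the general idea of mining the tone of voice can be sketched in a few lines of Python using the open-source librosa library. The features below (pitch statistics and short-time energy) are illustrative stand-ins, not the company's actual signal set:

```python
# Minimal sketch: extract simple prosodic features from a speech clip.
# This only illustrates the general idea of tone-of-voice analysis;
# production behavioral models are proprietary and far richer.
import librosa
import numpy as np

def prosodic_features(wav_path: str) -> dict:
    # Load the audio at its native sample rate
    y, sr = librosa.load(wav_path, sr=None)

    # Fundamental frequency (pitch) track via the pYIN algorithm
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]  # keep only voiced frames

    # Short-time energy (RMS) as a rough proxy for vocal effort
    rms = librosa.feature.rms(y=y)[0]

    return {
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_var": float(np.var(f0)) if f0.size else 0.0,  # tonal variance
        "energy_mean": float(np.mean(rms)),
        "duration_s": float(len(y) / sr),
    }
```

Feature vectors like these, stacked across many labeled recordings, are the kind of input downstream emotion and behavior classifiers would train on.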

It's super helpful context. When did your world first intersect with Web3 and blockchain? What was your reaction at that time? How has your perspective on how these two worlds intermixed changed over time?

Various aspects of applied AI intersect with Web3. There are implementations in the metaverse. When you're building digital avatars, you are still trying to have a human experience. The question is, "How do you have a human experience in a digital space?" There are various aspects that would limit you and other aspects that would facilitate that interaction.


It becomes interesting. When you're looking at emotions and behaviors, especially emotion, which is a very complicated science, it's multimodal. Humans express emotions in a variety of different ways: with facial expressions, tone of voice, spoken word, and body language. When you are in that digital space, a lot of those things are gone. You're interacting avatar to avatar. There are no facial expressions. There's no body language per se.

There are spoken words and then the tone of voice, which plays a huge role. It's probably the most powerful indicator. There are some interesting applications there. We have a very specific commercial focus that we have narrowed onto, but we make the technology available to a variety of other interesting people who are building other experiences. That's where those intersections happen, more in an indirect manner than a direct one.

I don't want to cut you off because I know the topic here is Web3, but you mentioned some things that help us understand the broad strokes AI can make. You mentioned the power of simply recognizing something like the inflection of someone's voice and how that can predict certain behaviors. You mentioned how that might even let you know if a debt holder will pay their bills. If we move into a metaverse space where we're using avatars, we can still measure their voice. I want to touch on the plethora of ways this could go.

If you're looking at, "Is someone going to pay the debt collector?" there's some connection to, "Is this person going to have a delinquent payment on their account? Is this person realistically worthy of credit?" I'm interested in how you see that spinning out. Into which verticals do you think people could take this tech? Maybe bring it back to your particular big-picture mission.

One aspect of predicting intent is to understand two elements of an interaction. One is understanding the nature of the interaction, which you can call the domain or the context of the conversation. The second is understanding the state of mind of the participants involved. If you can check both of those off, you have a good handle on both the cognitive state of mind of the folks interacting and the domain. You need a good handle on the domain because you have to build a custom model for each specific type of interaction.

For example, when we're building these machine learning engines, there's a very specialized model for a conversation happening between an agent and a client in a call center. It would be a different model for a conversation between a doctor and a patient. Once you build those models with the right amount and type of data, you have the ability to understand the state of mind, which is our specialty. We do that by extracting these signals from dialogue processing, especially the tone of voice. You could then do magical things, such as predicting intent.

For example, one of our early implementations was a challenge from a client. They gave us the task of predicting whether a debt holder will pay or not pay. They wanted a binary prediction, not a percentage probability: will pay or not pay, yes or no. It's as simple as that, and then see how accurate we are. We built a custom model around that, and we were able to do things that surprised us. We were able to make a prediction within the first twenty seconds of an interaction, that soon, and that prediction is typically anywhere from 82% to 85% accurate. You could take that model and build a variety of other interesting implementations or use cases on top of it.
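The shape of such an intent model can be illustrated with scikit-learn, assuming you already have per-call feature vectors from the opening twenty seconds and historical paid/unpaid labels. Everything here, from the random stand-in features to the classifier choice, is an assumption for illustration, not Behavioral Signals' actual pipeline:

```python
# Illustrative sketch only: a binary "will pay / won't pay" predictor
# trained on acoustic features from the first 20 seconds of each call.
# The features and labels are random stand-ins for real historical data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))    # e.g., pitch/energy stats per call opening
y = rng.integers(0, 2, size=1000)  # 1 = paid, 0 = did not pay

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("probability of payment:", clf.predict_proba(X_test[:1])[0, 1])
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With random data the accuracy hovers near chance; the quoted 82% to 85% figure would come from real labeled calls and a far richer feature set.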

In terms of Web3 and the metaverse, there's a whole variety of implementations that could come in, not necessarily for intent prediction but mostly for introducing an element of empathy and enabling some of these human-to-machine interactions to be more human-like. That's where the big promise is. You're talking to these machines or machine-like counterparts, for example, voice assistants, and you would want to have more of a human-like conversation. If you look at the current NLP landscape, the basic things have been solved. The NLP and NLU part, which is understanding the language, processing it, and speaking that language, has all been solved.

AI Unplugged: In terms of Web3 and metaverse, there is a whole variety of implementations that could come in for AI, mostly in terms of introducing an element of empathy and enabling human-to-machine interactions to be more human-like.

Your neighborhood voice assistant does that well, but what it can't do is understand the state of mind or the meaning behind what someone has spoken: the intent and the intention. You need that element to hold a conversation. A conversation is an essential element of interaction, especially verbal, speech-related interaction. You could now use some of these technologies to fill in those gaps and have amazing interactions and conversations that are very human-like, even though you might not be speaking with a human. You might be speaking with a digital entity, a software system, or a machine.

This is super interesting. I don't want to press too much on this point because we have things on the agenda, but the one question I do want to ask you is this. Do you see a world in which this traces back to how we develop a digital identity? This is a hot topic. Even we at Howl Labs have been looking into how to use it. We're looking at the transaction level. How do I look at the blockchain? I look at transactions and activities on-chain, use that like a Merkle tree, and start thinking, "At least financially speaking, these are this individual's behavioral patterns."

"Can I consider this person credit-worthy or worthy of reduced collateral toward a loan to purchase something via their crypto in a metaverse space?" I'm curious. Is there a world in which this builds into that? Can we technically be using AI and measuring the individual's social behavior within a metaverse space? That could be program-reflected in a non-fungible token, for example. What would you think are the implications of that? I'm trying to even figure out how far out we are from that. My guess is we're not very far.

We are already here. I don't want to say that we're close because we are here. A lot of these things are essential elements of digital twins, for lack of a better term. When you're creating digital twins, you're looking at what makes you human and the unique aspects of your humanness. One big aspect of your humanness is your normal emotional and behavioral state and how that plays into various interactions. What we're focusing on is the conversation aspect. What we have realized is that there is a unique conversational bioprint, just as you have a fingerprint that is unique to you, a retinal scan, and other biomarkers.

There is a unique way each of us converses and interacts. We express a whole wide variety of attributes, ranging from how fast we speak to the amount of energy and emotion we exude to aspects of politeness, engagement, and empathy. When you understand those, you can create that bioprint. Once you have that bioprint, you can make very intelligent decisions about who is your ideal conversational partner. Those decisions play a big role in stimulating and developing natural rapport and affinity.

Let me give you an example. We have all been in situations where we're having an adversarial, complicated, even tense conversation with someone. Let's say you're trying to negotiate something and you walk out of that conversation without achieving your objective. You lose the agenda. You didn't win the negotiation, but you feel good about the conversation. You would say to yourself, "I didn't get what I wanted, but that was a great dialogue. I feel good about that conversation."

There's that one instance, and then there's another where maybe you meet somebody at a cocktail mixer. It's a very casual and non-controversial setup. You're having a basic chit-chat, and in twenty minutes, you're like, "This is torture. I want to run away." The question is, "Why does that happen?" It's because either your natural styles are clicking and you're getting into a flow, or they're colliding and you're unable to build any natural affinity or rapport. That's independent of the agenda of the conversation. Once you understand that, you could magically influence future interactions. This would be a huge thing in the digital space.
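As a minimal sketch of what bioprint-based matchmaking could look like, assume each person's conversational style is reduced to a small vector of attributes and compatibility is approximated by cosine similarity. Both the attribute set and the similarity-equals-rapport assumption are simplifications for illustration, not Behavioral Signals' method:

```python
# Minimal sketch: route a caller to the agent whose "conversational
# bioprint" is most similar. The attributes and the use of cosine
# similarity as a stand-in for compatibility are illustrative assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy bioprints: [speech rate, energy, politeness, engagement, empathy]
caller = np.array([0.8, 0.6, 0.9, 0.7, 0.5])
agents = {
    "agent_a": np.array([0.7, 0.5, 0.9, 0.8, 0.6]),
    "agent_b": np.array([0.2, 0.9, 0.3, 0.4, 0.8]),
}

best = max(agents, key=lambda name: cosine(caller, agents[name]))
print("route caller to:", best)  # agent_a, the closer stylistic match
```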

That's very exciting stuff. Our additional guests have arrived. We have REO. Thanks so much for joining us, REO. I'm honored to have you. If you could start by introducing yourself and how your world intersects with AI and Web3 to give the audience a little bit of context, that would be great.

Thanks. I appreciate that. What's up, everybody? My name is REO. I am a visual artist, a music producer, and a DJ. I started out drawing and dancing, turned to making beats, and then did visuals for everything from album covers to directing music videos and stage visuals for Kanye, Beyoncé, Post Malone, and people like that. Right when the pandemic hit, all my tour visual jobs got canceled. I was in my house like everybody else, and then NFTs came about. I had been working for other people for so many years, working with clients and making digital art, but it wasn't considered art. I was a content creator.

NFTs came along at the perfect time, when I was ready to take a chance on myself. I started making a bunch of 3D art using Cinema 4D and Octane. I like making digital art, but making 3D art was a headache because most of the time you're putting out fires: "This is going wrong. The VRAM is this." It crashes. I'm used to making music, where things flow. You wake up out of a trance and have this song sitting in front of you. I started to play around with AI in 2021.

It was this site called Snowpixel. You would send something in, and it would take an hour to get something back. It was usually not anything great, but it was still cool to see it go. Every couple of months, I saw it getting better. Then Midjourney came. I've been using Midjourney for over a year. I noticed that it was very similar to the way music was. It started to feel like I was finding my flow state, but visually. It was thinking as fast as I was. I'm a pretty good Photoshopper.

I was taking things from Midjourney version 2 and version 3, going in and repainting on top of them, and finishing the thought. I was using it as a collaborator. As it got way better toward the end of 2022, I started to use more of it in my work. You can see it on my Instagram. I've been posting a lot of Fashion Week candid photos. I've been focusing on AI photography, in a way. I've also been a photographer in real life. There's the pressure of trying to get the shot and dealing with models or light and all these things. Now I can be on my bed without having to worry about how many video cards I have.

That's so dope. We will dive in deeper. Let's give Stanley a chance to introduce himself as well. This is all so exciting. You have such illustrious backgrounds. We could talk to any of you. I want to make sure that we cover the whole gamut here of what's going on. Stanley is one of my favorite human beings. Stanley, it's great to see you up on stage. We have had some fun conversations about AI over coffee and on stage. I would love it if you could tell folks what you're up to in the world of science and AI to kick things off.

Thank you so much for having me here. What I'm immediately up to is that I had a delicious cup of coffee by the beach here in Venice. You will have to join me soon. I'm so happy to join this conversation. I can already tell it's going to be great. I felt your pain wrestling with VRAM and Cinema 4D. I've been there before. I'm so excited to talk about AI. My name is Stanley Bishop. I go by Science Stanley in the decentralized science world. I'm a machine learning scientist and a solutions architect for bioinformatics.

I use machine learning to study DNA and do things with DNA in medicines. I’m also a passionate supporter of artists. I've operated an incubator in Venice Beach that tries to partner artists with technologists to build cool stuff. At this moment, it's all AI happening. It's so fun seeing all these cool collaborations happening. I'm happy to be here and join the conversation.

Thanks so much. Ivan, we've got a stacked panel and a lot of topics to get through. Do you want to kick us off with a question for this crew?

We have gotten to the topic of artists, and this is an interesting discussion about the creation and dissemination of art. If we're looking at AI-generated art, we know that NFTs will only increase the velocity at which that type of art can be created, hosted, minted, and put out. I am by no means an expert like some of those in the crowd, so I'm going to bring up at least what I understand about AI-generated art. AI is drawing on hundreds of thousands of works of art based on prompts or whatever is being input by the person who wants to create. To an extent, the AI is learning from all of these different styles and all of the different artworks that other artists have created.

I don't know if anyone saw this. There was a group of creatives and musicians that filed a lawsuit against two AI companies. I can't remember both of them. I know that Midjourney was one of them. The other one might have been Stability AI. This kicks off an interesting topic. How do we determine what is original? How do we determine what is not? How do we get into this very complicated field of IP law, particularly when it's based around Web3 and now has crossover with AI-generated art? I'll pose that question to everyone. REO, maybe I'll let you go first because you were the first one to bring this up, but I would love to hear all of the speakers' ideas on this.

Credit is an interesting topic. The quickest analogy I can think of is fashion. As soon as something goes on the runway, it immediately trickles down and becomes a trend. It ends up going from the high street all the way down to Zara and H&M. Most times, we don't know who created the blazer, who made the top hat, or who made the parachute pants. It becomes part of the creative ethos. The same happens in music. Somebody makes something, it becomes a trend, and the next thing you know, everybody is doing it. Credit gets lost in the sauce.

I do think it's important for credit to be given. As an artist, people have stolen from me, and I would like my credit. But we're going to have to be very careful in how we determine who is deserving of it, because if we're all inspired by something and someone, then how far does that go? Say there's a digital fingerprint, and the AI looks at an image and says, "You typed in REO to make this image." Do I get a royalty for that? Is there another royalty for whoever I'm inspired by? I don't have an answer. I've been thinking a lot about this and how far it can go.

That's very well said. I see Stanley jumping in. Go for it.

I was going to mention that this is an area where we have developments on the legal front and also on the engineering front. It might be that down the road, we rely on tools to help us understand this stuff. As a fun example of that direction of development, one of the ingredients of many of these generative AI models is a system called CLIP (Contrastive Language-Image Pre-training). It is the part of the AI brain that connects images and text and understands the connections between those two things.

It's an invertible architecture. In the same way that you can provide text and get an image, you can do the reverse: take an image and produce text. It might be that in the future, we reverse-prompt images, and that could end up being part of how we figure out whose style is most present and who deserves the credit. There might be some math to help us out there.
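This image-to-text direction is easy to try today. A minimal sketch using the CLIP model from Hugging Face's transformers library scores candidate style descriptions against an image; the file name and candidate prompts below are placeholders, and real attribution tooling would need far more than this:

```python
# Minimal sketch: use CLIP to score candidate style descriptions against
# an image, the building block behind crude "reverse prompting."
# Requires: pip install torch transformers pillow
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.png")  # hypothetical local file
candidates = [
    "a cubist painting",
    "a vaporwave 3D render",
    "a candid fashion-week photograph",
]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    # Softmax over image-text similarity logits gives relative match scores
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for text, p in zip(candidates, probs.tolist()):
    print(f"{p:.2f}  {text}")
```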

Rana, I want to hear your ideas on this as well, but I will point to something. Both of you have mentioned this idea of where credit goes and where you point to the genesis of creation. Most of us would agree that all art is a response and a reaction to the art that has preceded it. Art also has the historical property of being able to encapsulate the zeitgeist of the moment in which it was created. There's a historical component to it. The question remains. It is one thing for me as an individual, through my volitional experience, to interpret art, feel it, see the universal in it, and decide to respond to it.

What happens when AI does this? Let's put it this way. AI is able to process, study, and curate a milieu of artworks that a human being cannot process in such a short span of time. I see two paths. One, as you mentioned, Stanley, there is a world in which we use AI to improve, speed up, and perhaps even expand the realm of creativity through which we can create art. In a sense, AI becomes curatorial for us. It can give us a canvas on which to work.

Then there is the second part of this: what happens when we have AI-generated art that has value, that is beautiful, and that is appreciated, but we know that all of it is the result of a set of heuristics for how the AI has been asked to interpret certain kinds of art and then replicate something in that style? Is this original? Is it created by the curator who put it together? Is the AI the one that gets the credit? I'll pause there. Rana, I would love to hear your thoughts, and then maybe everyone can jump in.

We have to start by trying to answer the question, "What is art?" We can all agree that it can be many things to many people. It is subjective; no two people fully agree. To that extent, you would first need to agree on whether a specific thing is art or not. Many of us, certainly not all, will concede that AI-generated images can be considered a form of art in many cases, but the real and more contentious question here is, "Who is the artist or author: the human, the AI, or both? In what roles and to what extent?"

There's an underlying question around credit that comes into play. I don't want to say that I have the answers; we're all trying to figure this out, and there are technical complexities around even making that happen. This whole thing has evolved faster than the plumbing needed to make attribution possible. The term prompt-based is worth unpacking a bit. Prompt-based AI is a collaborative process. The human and the machine inputs are combined to create the product.


For example, if you look at Midjourney, users can type a string of words and receive visual outputs approximating the original idea. As a user, you can continue to iterate on those outputs, nudge them toward a particular direction or concept, refine them, alter them, or co-create with them. You can go on and on. The power to express ideas and concepts is potentially limitless. The number of generative outputs you can create with these programs is effectively infinite, before a user can even decide, "Where do we start?"

Because it's so complex and also opaque, we don't know the models behind some of these implementations in DALL-E and Midjourney. They have not been made public. A lot of misconceptions arise from that. That's why people say that these programs are smashing existing artwork together to form something new. It's not as simple as that. There are a lot of other complexities behind the scenes that make what we see possible. I'll stop there. Attribution is going to be a challenging thing to do. We have to figure this out, but I don't see easy solutions at this point.

I love that answer. I'm realizing that we're always on limited time, and there are a few other topics we want to cover. Josh, if you would like, I could open up the floor to some of the things we discussed on the DeFi-NFT crossover. Let's talk about DeFi and NFTs. If there's something else we can agree on, it's that the intersection of those two niches is already growing and will continue to grow. Inherently, there are going to be a lot of questions about data modeling.

How do we use the information that we're accruing from wallets? How do we use an individual's behavior across their NFT purchases, and how do they leverage the value behind those NFTs? We have seen Blur make strides there. We have seen plenty of applications come out of the idea of collateralizing an NFT, borrowing against its value. I would love it if we could jump into it. Maybe you can give us your ideas on how we can imbue AI into the way we model and use the data that we're seeing for individual users within the DeFi and NFT world.

To understand how NFTs can be enabled with our technology, we first need to understand that AI disciplines already have intersection points with the current generation of NFTs. The digital representations of NFTs rely on various building blocks or digital formats such as images, videos, and text. All of these representations map into different AI subdisciplines, including various implementations around computer vision.

NFTs are mostly about images and videos. They're a perfect fit to leverage the advancements in computer vision and techniques such as CNNs (convolutional neural networks) and GANs (generative adversarial networks). Transformers have pushed the boundaries of computer vision, image generation, and object recognition. All of these can be applied to the next wave of NFT technologies. I'm not an expert in NFTs, but I can see those applications coming.

Then there's natural language understanding, the stuff we were talking about earlier. Language is a fundamental form for expressing cognition, and that includes forms of ownership. NLU is an aspect of that. The idea of superimposing language understanding onto existing forms of NFTs to enrich their interactivity and user experience is an area we are probably going to get into soon. I see some of those implementations happening.

Related to that, speech recognition is another aspect of deep learning that can have a very big impact on NFTs. You could use speech intelligence capabilities such as speech recognition, tone analysis, and the stuff that we do to power interesting forms of NFTs. Audio NFTs could be a perfect scenario for speech intelligence methods, which is the next step. There are a lot of different implementations. We could go on and on, but AI is powering a lot of the subdisciplines that this NFT landscape is built on.

AI Unplugged: AI is powering a lot of the subdisciplines that the NFT landscape is built on.

How about you, REO?

DeFi is not my world. I wouldn't know how to comment on it personally.

REO, have you looked at the AI applications outside of the entertainment industry at all?

Which ones?

Healthcare, supply chain, or that type of stuff.

I haven't. I'm just on the music, the writing, and the art side. It’s strictly creative.

Let's cover that topic a little bit more. I appreciated learning a little bit more about that side of things for you. Where do you see AI going next in terms of music in particular? What are you concerned about? What are you excited about?

On one hand, I've been making music for a long time. I got in when you had to have an MPC, a $2,000 piece of equipment. It had a high barrier to entry and a steep learning curve. Right after I got one of those and learned how to use it, I started seeing things like FruityLoops and Pro Tools. People could download the software and use it. They didn't have to buy that piece of equipment.

It's funny to see people now have Splice, making loops and stuff they can plug and play. Here comes AI, where you won't even have to have a musical background at all. You could just type in, "Drake and The Beatles make an album about cake." It's scary to think that everyone will have this power after all the years I've spent learning how to do this. At the same time, having seen it from the art side, I'm not intimidated, because I trust my eye and my ear to curate things. I'm excited to play with these new tools and see what someone like me with a musical background can do.

One last thing: what happens when the AI listens to my Spotify, looks at my playlists, and says, "You like these types of songs with these chord progressions and this tempo. What if I made you a song based on all of your likes?" It makes me the most beautiful thing I've ever heard. I'm interested in what happens then because, at the end of the day, we're not in charge of knowing whether it's AI or not. We hear it, we get the dopamine hit, and then we find out it wasn't made by a human. That's going to be an interesting point.

This is a broader conversation that we had with Shelly Palmer on the show. It seems to come up over dinner and drinks all the time. Does it matter if a piece of art is created by humans or AI in terms of its perceived value to humanity?

I don't think so. Once we like it, that's it.

Does that devalue human creativity? Does that become an opportunity to enhance human creativity using AI?

We're a nostalgic culture. Now that everyone can take pictures with their phones, people still do film photography. People still buy vinyl. People still go the long way. The art and the practice of learning and taking that time are going to make it more special if you paint for real. I would hope so.

I'm going to move us away from this a little bit because we have already spent a good amount of time on AI art and its value as humans use it. I want to bring up this idea. We have heard a lot about how we can do a certain level of social listening. We talked about how art could potentially respond to our emotions. As we have all seen, dynamic NFTs have also been a big point of discussion. We're looking at ways in which an NFT can evolve or change over time.

Let's say I create an AI-generated version of myself now. Over time, it adjusts as I age; it's almost like I get a Dorian Gray painting of myself as an NFT. It might be an interesting topic to consider. How can we intermingle AI and dynamic NFTs to create art that evolves over time with our moods, tone of voice, inflections, or even the changing pattern of our artistic interests? I'll let you jump in. That could be something cool to explore. I'm certainly interested in what you have to say. Go for it, Stanley.

You thought up the question I wanted to answer. One of the interesting things is the role that reinforcement learning plays. GPT, for example, had been around for a while, but ChatGPT became very exciting when reinforcement learning was implemented. GPT was given the ability to interact with you and learn from the conversation. That reinforcement learning layer is going to be so powerful for art. Think of its primitive form in organic neural networks: the Nielsen rating system. We had a system where every season, we looked at the numbers, and based on that, we moved our culture in a different direction. We're going to be able to do that in artificial brains for sure.

The blockchain will be the cultural ledger. When you publish an NFT, be it a movie, a picture, or a song, there's a record on the blockchain of who used it and how popular it was. That's something we can feed into further models, especially when you consider Midjourney, which to my mind has distinguished itself with good model engineering. What they're doing that's special is taking aesthetic feedback from their artists and integrating it into their engineering process. This is going to be a huge part of the intersection between crypto, NFTs, and AI for sure.
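Nothing standard exists for this pipeline yet, so the following is purely speculative: a sketch of how per-token activity decoded from a ledger could be collapsed into popularity scores that a training process might then treat as a preference signal. The record format and event weights are invented for illustration:

```python
# Speculative sketch: aggregate on-chain transfer records into a
# per-token popularity score that could serve as a preference signal
# for tuning a generative model. Record fields and weights are invented.
from collections import Counter

# Each record: (token_id, event) as it might be decoded from a ledger
events = [
    ("song-001", "mint"), ("song-001", "sale"), ("song-001", "sale"),
    ("pic-042", "mint"), ("pic-042", "sale"),
]

WEIGHTS = {"mint": 1, "sale": 3}  # arbitrary illustrative weighting

popularity = Counter()
for token_id, event in events:
    popularity[token_id] += WEIGHTS.get(event, 0)

# Rank tokens; the top of this list could label "preferred" outputs
for token_id, score in popularity.most_common():
    print(token_id, score)
```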

I love that. Thank you for that response; it's on the money. There is something to be said about how we're going to be interacting with this. Rana, let me know if you have ideas here. I'm interested to hear your thoughts on how we could use software like the kind you're building to create this evolving art via dynamic NFTs.

To be honest, we haven't explored it much. There are a lot of possibilities for applying dynamic NFTs across a whole wide variety of subindustries in gaming, sports, and art. A lot of interesting applications come to mind, but we're yet to explore them. It's an interesting topic, and I can't say that I'm very knowledgeable about the opportunities there, so I'll let others speak.

What we're learning from this conversation is that we have three very impressive gentlemen using AI in very different ways. There are so many ways to use it that you could have a panel of 50 folks up here doing different things with AI and offering different perspectives. That's why this is such an important topic for us. Because these guys have such great backgrounds, I would love to give the audience a chance to ask a few questions, Danny, if that's okay with you. If anyone raising their hand would like to ask a question about any of these aspects of AI, from music and art to science, I'm happy to take a few questions from the audience. It looks like we've got some folks coming on board. What's up? How is everyone doing?

I'm good. I've heard solutions to a lot of the world's problems with AI, Web3, the recession, trillion dollars of debt, economic inflation, and all these things. I have a solution.

What is your solution? What is your question? Were you just sharing your thoughts?

I created a poll. The premise is that the people of the world can make a difference. Whether it's a bear market or AI, as consumers, the 99% of the world is what matters.

On that thought of how individuals can have an impact, let's see if anyone else in the crowd has particular ideas or a question for the speakers about how we can use consumer data, or about what we know of how individual information is processed.

I'm curious. Stanley, REO, and Rana, do you have any questions for each other based on what you are working on and how the rest of you are doing different things in the industry?

I caught the first part of the digital twin conversation and was so blown away. I shared a link to a project I'm working on to build digital twins for community leaders to use as a tool in growing their communities. I was curious whether people had any opinions on the role of the community in training good AI models. I'm thinking along the lines of the old saying that you are the 100 people you know best.

REO and Rana, have you explored using crowdsourcing to train AI?

I haven't got there yet.

Data is the essential core element of training AI and also the biggest challenge. Getting high-quality, unbiased data is easier said than done. It's a hugely complex task. A lot of what we are looking at, including the models that we have seen from generative AI, is crowdsourced. It's built on community data, both public and non-public, including material that is copyrighted. To get amazing results, you need that depth and breadth.


In a lot of what we do in building specific emotional and behavioral AI models, the use cases lean toward more natural interactions, not necessarily business interactions but day-to-day interactions. You have to tap into community input, both in the design approaches of the modeling and in the data. It's not a free-for-all. We do it by selectively and creatively engaging various cohorts and academic institutions. We partner with them and make our technology available to them for free so that they can use it to build other interesting implementations on top of it.

They're thinking of ideas that we haven't even thought about. They're coming up with new implementations that surprise us and applying them to very unique data sets that they have curated for those implementations. The models learn from that and become smarter, just as a human would if exposed to a variety of different learning modes and data sets. They get better at the core thing they need to do, whether that's pinpointing an emotional or behavioral marker at a given instant in time. It's a big aspect of what we try to do, but it's extremely hard. It's a constant battle.

I wanted to ask a question. At least from where I sit, every 2 or 3 months, AI seems to double itself, getting twice as good or twice as fast. If it continues at this rate, by the end of 2023, things will be pretty crazy. Where do you see this being in five years, once we get past the creative stuff? We've even mentioned some of the healthcare things. I was hearing talk about Web5 being about mental health. Imagine an AI being able to notice that you're still depressed about your mother passing, and what it would tell you or what advice it would give you, almost like a therapist. Do you see it moving into stuff like that? Where do you see it?

I would go out on a limb and say that's the most commonly debated question in the AI community: "Where are we headed? Where are we going to get to? Do we have the right tools for it?" The stuff that I'm exposed to on a day-to-day basis continues to surprise me and underscores the fact that there's a lot more progress than what we see or have access to.

The folks that I interact with, folks who are doing amazing work toward building various implementations of AGI, are very much of the mindset that we are going to get to sentient AI very soon. There's a whole spread of opinions on when that will happen. Is it 5, 15, 20, or 50 years away? I would think it's sooner rather than later. It's uncharted territory from there on. We get to that point of singularity, and there's a rapid inflection in the growth and achievement of AGI. Amazing things are going to happen, but I also think we don't know what's going to happen at that point.

The point that we don't know, and shouldn't expect to be able to know, is important to underline. I decided to go into artificial intelligence science when I read a book called The Singularity Is Near by Ray Kurzweil. That book is wonderful. Anyone whose mind has been boggled by everything happening should check it out. It posits that if technology gets to a place where it becomes self-accelerating, where faster technology develops even faster technology, we could wind up in a place very different from anything we can imagine. Everyone, check out that book.

The Singularity Is Near

There is one thing related to the book that is fun. There's a saying, "AI is the new electricity." That might have been Andrew Ng, now that I remember. We're trying to understand this transformation, and we might only have one other thing to compare it to: electricity. The next time you go through your day, think of how many things you do completely differently because we have electricity to help us. This ability to deploy very specific intelligence to support different tasks might be as radical a change, maybe more so.

That's a very interesting question. That's one that has been on my mind as well. Danny, do we have any other questions from the audience?

We don't have any requests coming up.

On that little lull, perhaps we have dazzled everyone so much that there are no questions left, besides going off to read and explore this world further. If any of the speakers have closing statements, we can drop them in and thank everyone for jumping on.

I want to say thank you for inviting me up. This has been a great conversation. I already learned a lot of stuff too. I appreciate it.

Thanks, REO, for joining us. I'm excited to follow what you're doing in this space. It's pioneering.

I'll echo that sentiment. Thank you for having us. Thank you for having me. It was a great conversation.

Stanley, are there any parting words, my friend?

Thank you so much for having me. This is great.

We look forward to seeing you at Outer Edge LA. It's from March 20th to the 23rd, 2023. Stanley is on a fascinating panel discussing the intersection of science and Web3. That's one I'm going to be checking out for sure.

To everyone who joined, and to those who join weekly or biweekly, we appreciate having you in the audience. Until the next one. Danny, is there anything we missed? Otherwise, you can continue with your sign-off. It sounds like everything is okay.

We do have some announcements to make. The main event, Outer Edge, in March 2023 is also running a hackathon. I wanted to read the announcement myself but I realized we're so tech-heavy on this Twitter Space. Why don't we get an AI to read it? Here's how an AI will read the Outer Edge announcement of a hackathon, "Here's a reminder that we're doing a hackathon this year as part of Outer Edge LA. It's happening Saturday and Sunday, March 18th-19th, 2023, the weekend before the big event. It's going to be the dopest hackathon you've ever hacked. Application-only."

"If you are accepted, attend, and submit a project entry for judging, then you are guaranteed a free GA ticket to the event. That's a $299 value. The price is going to go up and up from here. Make sure to apply now before it's filled up. The Outer Edge LA hackathon is where anyone and everyone worth anything will build the next big thing. You will have a chance to win $10,000-plus in prizes and other awesome perks. Join a community of coders, entrepreneurs, scientists, designers, storytellers, makers, builders, artists, and technologists in co-creating the future of Web3."

"That's right. It's for more than just programmers. Anyone with the skill to create collaboratively is welcome. Anyone can apply regardless of attendance at Outer Edge LA or the hackathon experience. We will be selecting participants based on the application they submit to ensure everyone has an excellent community experience. It's happening Saturday and Sunday, March 18th-19th, 2023, the weekend before the Outer Edge main event. Visit Hackathon.OuterEdge.Live to apply."

Dan, we should work on her bedside manner, but that's still an interesting way to sign off.

Thank you so much, everyone, for joining. Remember, these Twitter Spaces happen biweekly and overlap with our Twitter Spaces from Howl Labs, The Howl, so keep an eye out. Remember to turn your notifications on for Outer Edge's account, the @TryHowl account, and the @EdgeOfNFT account so that you don't miss any upcoming Twitter Spaces. Other than that, that will be it. Thank you so much for joining. See you in another edition.

---

We have reached the outer limit of the show. Thanks for exploring with us. We've got space for more adventures on this starship, so invite your friends and recruit some cool strangers who will make this journey so much better. How? Go to iTunes, rate us, and say something cool. Go to EdgeOfNFT.com to dive further down the rabbit hole.
