Unlocking Creativity with Prompt Engineering

Captions
It's still quite hard to describe things with words. Designers, when they do work for clients, it's one of their pet peeves, because clients are like, they don't like it, but they can't explain why.

With every new technology some jobs are lost while others are gained, and while people often focus on the former, in this episode we're highlighting the latter: a highly creative role that emerges alongside AI, the prompt engineer. Until AI can close the loop on its own, each tool still requires a set of prompts, and just like a composer feeds an instrument a set of notes to play, a prompt engineer feeds the AI a map of what to produce. And if we know anything from music, it's that composing great music takes great skill. So in this episode we dive into the emerging importance of prompting, the early learnings and how to do it effectively, and also where this field might be heading. And we do so with Guy Parsons. Guy has been an early mover in the text-to-image AI space, having written the DALL-E 2 Prompt Book in July of last year. So will the prompt engineer be more like the highly sought-after DevOps engineer, or a proficiency like Excel that you find on every resume? Listen in to hear Guy's take. By the way, we're thinking of running a prompt competition coming up, so if you think you have what it takes, email us at podpitches@a16z.com with the subject "prompt engineer."

As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.

[Music]

Guy, welcome to the show.

Thank you for having me, I'm excited to be here.

Let's take a step back. When we originally reached out to you, it was around six months ago, and you'd just written something called your prompt book. Why don't you give everyone a little bit of an idea of what that prompt book was, what it is now, and also what prompted you to want to write it in the first place?

This was in the initial heyday of DALL-E 2, which was OpenAI's text-to-image model. They rolled it out to a few test people at a time; they were super cautious about how it might be misused, how it could end up having a backlash, all these kinds of things, which then only increased the sense of people wanting to get their hands on this thing. Because at the time, this was pretty much it: the things you might think of now, Stable Diffusion, Midjourney, it predated those by some small margin, and it seemed way ahead of anything people had tried using before. If you've used a text-to-image AI by now, you know it's basically a text box, and it all comes down to what you type in. It doesn't have buttons and all the kinds of controls you might expect when you work in programs like Photoshop. So the question then becomes, for a lot of people: what do you do once your mind goes blank, or you don't actually know the name or the words for what you're trying to type in? If you've actually been to art school, or you're up on your art history, or you have the design language, then you've probably got a head start on everyone else. But on places like Twitter and Reddit there were people posting these amazing images, and because of the nature of social media, it all gets lost. So I started trying to collect these cool examples, and these cool terms people were using to create these amazing visual effects, and I started putting everything into, essentially, a slide deck.
By the time I'd copied and pasted all these cool things I'd seen, it was 80, 100 slides long, something like that. So then I, rather grandly, called it a book and shared it online, and it's just a jumping-off point for people to realize the kind of stuff that, at the time, these tools were just about becoming capable of. Obviously now their capabilities are even more advanced.

Definitely, and we'll get into that, because it's crazy to see how these tools, and the way people are using them, have all changed in a matter of, again, just six months. It feels like yesterday when we didn't even have access to this. But there's this idea that these are tools, and just like any other tool, person A versus person B may not get the same result; they may not have the same understanding of how to leverage the tool. So before we get into the tips and tricks that you've learned, I just want to give the audience a broad sense of how much time you've spent within the bowels of Midjourney, DALL-E, Stable Diffusion. If you could give an estimate, how much time do you think you've spent mastering this idea of prompting?

I wouldn't say I'm a master in any sense. It's so engaging and interesting to experiment with these tools, so, in the last six months, for sure a couple of hundred hours. What I really admire is people that are using these tools to create a real body of work, where they really pursue a direction to discover what's possible. I think I saw a thread where, I think it's Midjourney, you can get it to tell you how many prompts you've ever done, and there are people in the thousands, hundreds of thousands.

Yeah, and I appreciate how humble you are. We're, again, six months in. It's like when there's a new coding language and you see people write job descriptions for developers looking for someone with five years' experience when that particular language has only been around for six months or a year. So yes, I don't think anyone could definitively say they're an expert in prompt engineering, partially because it's only been around for so long, but I do think you've shared a lot more than the average person in this space. And given your experience with these tools, I'm curious if you see a parallel skill set that you can compare prompt engineering to. Is it similar to learning to code? Is it similar to being able to storytell effectively? Is it similar to being able to process numbers in an Excel sheet? Is there a parallel skill set where it reminds you of something you've done before?

I think there was an era, and I don't know if we're still in it, where there was a certain category of person who could consider themselves good at Googling stuff: do you know there's this thing where you can type in "filetype:" this and find that thing? There's this big debate over whether, especially in text-to-image, there's really any artistry to it, and for me, I'm not so sure, because I'm no artist. But there's definitely something to it. It's almost like discovering an image that's already out there; you've just got to find the words that summon it forth, as if you're navigating an infinite Pinterest of things that haven't quite existed yet, waiting to be manifested.

Well, to that point, we have so much information online that I feel like that was a skill set even before these AI tools. I used to work on a product called Trends, and that really was about using the right tools, like subreddit stats or Ahrefs or other data sets online, and learning to parse them and surface what other people find interesting.
But let's get into the nitty-gritty. You wrote this prompt book, and you've been playing around with these tools for quite some time. Are there certain learnings, maybe the 80/20 approach of becoming a good prompt engineer, that you think are really valuable to understand? Maybe it's the prompt length, maybe it's using certain modifiers within your prompt, maybe it's just a framework for thinking about prompting. Is there anything that's surfaced that would be really valuable to someone who's just starting out with prompting?

Oh yeah. If you've never used one before, the best way to explain how they work is to always describe something as if it already exists. Imagine that it's an image in some kind of downloadable clip-art library, or it's in a photography gallery and someone's written underneath, "this is a fine example of a modern photography shot." Those are the kinds of descriptions that you're trying to mimic to tell these tools what you're looking for. It also gives you a natural sense of why these tools are bad at some things, and the kinds of prompts that don't really work. Because if there's, let's say, an archive image of some women celebrating on the steps of a church in 1972, it will have that kind of caption, but the caption never goes, "the woman on the left is wearing a yellow hat, the woman on the right is wearing..." People just don't go into that kind of detail, because you can see it. So, ironically, captions often describe generally what the image is about, but not how you would draw it step by step. And that's why these tools are less good when you say, "I want this thing over here, and then that thing next to it, and then something on top, and that thing should be much bigger," because in real life, that's not how images are described in language.

That's a really good point, because that's how these AIs were trained, right? I think DALL-E trained on 600-plus million images, and they're using the alt text from online, again, that descriptor. I've never thought about it that way, but you could actually train yourself to become a good prompter by reviewing the inputs to the tool. I've never done this, but I can imagine someone literally going online and reading the alt text on different images and going, "ah, this is how this was described; this is how an AI might interpret my future prompt." And to your point, something I've learned from my very limited prompting is just the level of detail that you need in your prompt. When I first started, I'd type, you know, "monkey wearing a hat." You don't even realize, until you start prompting, the many iterations that could come from that. You have one image in your head, but then you get back all of these different results, and you end up looking at different prompt search engines or libraries and seeing what other people are doing, and you're like, "this prompt is 200 words; I would have never thought to do that."

Yeah, though I think there's something to be said about length: the longer prompts get, there are definitely diminishing returns.
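To make that caption-style advice concrete, here is a minimal sketch of text-to-image prompting with Stable Diffusion via the open-source diffusers library. The checkpoint name, modifiers, and parameter values are illustrative assumptions, not anything specified in the episode:

```python
# Minimal text-to-image sketch using Hugging Face's diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Phrase the prompt like a caption for an image that already exists,
# stacking modifiers (era, medium, style) rather than spatial instructions.
prompt = (
    "Archive photograph of women celebrating on the steps of a church, "
    "1972, Kodachrome, 35mm film grain, photojournalism"
)
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=30).images[0]
image.save("church_steps.png")
```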
As I went through your prompt book, there were so many different ways that you could describe a shot: a different camera angle, a time period, as you just spoke to, a specific type of artistry, or even a specific artist, and I know there's some controversy around using specific artists' work to train new images. But let's look at today. As we discussed, six months later these tools have evolved a lot. Are there any different ways that you can prompt today, or leverage these tools, that didn't exist six months ago, that are really important and maybe extend the way you can use them?

A hundred percent. So the main one, and these things are changing all the time, is that there are now increasingly tools where you can prompt with an image. Again, that's almost an entire new field of exploration, because it's not combining the image with your words in the way you would expect something like Photoshop to; it's not collaging them together. So the results can be really surprising, really unexpected, really quite difficult to control. But then you potentially have interesting opportunities, like: okay, I can make a load of abstract stuff using my brand colors, or something that's important to me, photos of me, who knows, and then I'm going to multiply that baseline, that visual base, with other custom prompts, and then everything will have this kind of likeness. And then, of course, the big thing that happened since the days of the prompt book was that huge spike in interest in selfies: Lensa and the profile-picture AIs, and there are like a dozen of them, which was just prompting with your face, basically. Prompting with 20 selfies and saying, "yeah, I want to see more of this guy, because it's me, obviously." And within the image-to-image space you've now got other startups doing interesting things, like, "give us 10 core images and we'll generate you infinite versions of that, based on the modifiers you want to see." So that's a really interesting space that's probably going to power the next generation of how people, especially consumers, interact with these products.

Yeah. One way you could put it is that when we first got access to these tools, you were really starting from scratch: you didn't even have the prompt libraries available; you just had an image in your head. But today you not only have those libraries, you also have images you can input, so you're not starting from scratch. You have a baseline; as you said, maybe it's brand colors, maybe it's a certain style, and instead of having to articulate that yourself, you can just say, "here's what I want." But to your point, sometimes it's hard to control, right? You're trying to tell the AI, "I want this output," and you don't always get it. So something I want to ask you about is how you've learned to rein that in, to get a higher throughput of images you want versus images you don't want. Because something interesting about these AIs is that they're a bit of a black box: you can't understand every little piece that went from your input to your output, so you can't fine-tune it in the same ways as some other skills we've learned in the past. How have you learned to actually become a better prompt engineer, given that black-box nature?
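As a sketch of what that image prompting looks like in code, here is Stable Diffusion's image-to-image mode in diffusers, with a hypothetical brand_moodboard.png acting as the visual baseline; filenames and parameter values are assumptions for illustration:

```python
# Prompting "with an image": Stable Diffusion img2img, via diffusers.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A rough source image (e.g. a mood board in your brand colors) acts as
# the baseline that the text prompt then "multiplies".
init_image = Image.open("brand_moodboard.png").convert("RGB").resize((512, 512))

# strength controls how far the output may drift from the source image:
# low values stay close to it, high values follow the text prompt more.
result = pipe(
    prompt="abstract poster, soft gradients, teal and orange",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("brand_variation.png")
```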
I mean, I think another aspect is that if you and I both type in the same thing, it's not going to make the same picture, because it starts from this random cloud of noise, and your cloud of noise is different from mine, and then it slowly turns these clouds more and more into something that looks like an orangutan in a tuxedo. So that's really frustrating when you're testing things, because: was it good, or did you just get lucky? Or alternatively, if you're not seeing what you expected, should you just hit it again and again? When you see someone else has made something really cool, did they do something really clever, or was it just persistence?

I have found myself in that exact spot, where I have an idea for what I want. It's not something so important that I need to nail it; I just need it close enough. And I'm getting these results, and they're getting a little closer and closer, and I find myself thinking, "let's just generate it again; if I do this enough times, I'll eventually get something workable." So do you have any thoughts on how you avoid ending up in that spot where you're just hoping for a better image, pulling the AI slot machine, if you will?

I mean, I think unless you have evidence, and I think over time there will be more, which is why some of these other tools and communities are so important, where you see lots of other people's work. If you can see someone else has done it, then you know it's possible. Ideally you can also see the prompt they used and work out how they did it, but even if not, you know you can get there. Also, you run into these things where you think it's the most simple thing, and then it turns out the model doesn't know what a hot dog is. It just doesn't understand the rules of what, physically, that can and can't look like, and you keep trying, and now the sausage is at a right angle and the bun has ears, because it's starting to throw in some, like, dachshund aesthetics, and then you're typing "--no" this, "--no" that. That's the limitation of where the technology is at the moment: it's amazing, until you're trying to do something very specific, and especially if you want to do it to a very high, or professional, standard.

Well, I'm glad you mentioned the negative queries. That's something I think a lot of people don't know: you can say, "hey AI, I don't want this." It doesn't always manage to still generate what you're looking for. There are also almost these glitches; one that's kind of infamous now is hands, right? You can generate these beautiful images of Instagram-looking models, and put them in all these different backgrounds, and you're like, "wow, this is amazing," and then it's always, "well, look at the hands." Which is kind of funny; it's like the perfect manifestation of how technology is always much better in one direction when it's invented, but there are always these things that need to be iterated on. So are there other things worth knowing, whether it's these negative prompts or these glitches still in the matrix? What would you call out, from your many hours of being deep in these tools?

I think it depends on the model.
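On the "cloud of noise" and the slot-machine problem: in code you can pin down that starting noise with a seed, and exclude known failure modes with a negative prompt. A minimal sketch with diffusers follows; the Midjourney flags mentioned in the comment (--seed and --no) are the rough command-line equivalents:

```python
# Taming the slot machine: fix the starting noise with a seed so the
# same prompt reproduces the same image, and steer away from failure
# modes with a negative prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # same seed -> same noise
image = pipe(
    "an orangutan in a tuxedo, studio portrait",
    negative_prompt="blurry, deformed hands, extra limbs",
    generator=generator,
).images[0]
image.save("orangutan_seed42.png")
```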
One example: when DALL-E came out, it wasn't very good at understanding that it's drawing things in a square. If you're drawing a person, it's often going to have its feet and its head cut off, because that's what it's seen in portrait photos. But one thing you could do with DALL-E is upload an image to do variations of, and if you upload an image with a little white border, then it knows that nothing can go there, and that encourages it, forces it, to think inside the box, if you will. But then of course you now have tools like Midjourney, who have been iterating on their text-to-image model a lot more aggressively than OpenAI, who, understandably, maybe have some other things in the works, and Midjourney has now brought that into the model itself. So when you type things in, it knows it's a square, and it will sometimes do quite clever things to fit everything in that space. If you ask for a group selfie of three people on something like DALL-E, it's going to be cut off at the edge, because it's used to seeing someone taking a disposable-camera photo; whereas Midjourney is clever enough to know that one of them needs to be standing behind the other, or leaning in from the side. So it's quite clever how they've managed to solve that composition problem within the AI itself. The prompt-engineering thing, I think, is just understanding the possibilities and the limitations of where you are at the moment, while meanwhile these other people are doing very technically serious work to make those limitations no longer relevant.

Well, I'm glad you brought up the differences between these tools. If we talk about just Stable Diffusion, Midjourney, and DALL-E, the three that a lot of people are familiar with: would you liken the ability to prompt within each of these more to the difference between Excel and Google Sheets, where if you know Excel you really can drop right into Google Sheets and it's relatively straightforward; you might have to switch up your shortcuts a little or learn one small thing here and there, but for the most part you can move from one to the other? Or would you liken them more to learning to speak different languages?

It's not that different. The principles are very similar, and then the nuances are slightly different. If you went from, say, DALL-E to Midjourney, it would feel amazing, and then if you went back in the other direction, you'd feel like it doesn't do what you want; but that's because Midjourney is doing so much of the heavy lifting to help you make something really good. If you're using the tools to create some very specific effect, kind of like, I guess, a very complicated Excel formula, it would not have the exact same output in the other one at all, because they're trained on different sets of images: Stable Diffusion on, I think, some five billion, and then a smaller set of around 12 million for the "what does nice look like" fine-tuning that's happened on top. And in how they've optimized it, they've made different creative decisions. It's maybe like driving a different car: if you floor the accelerator in various cars, some are going to take off, some are going to trundle along.

Good analogy.
Do you also find, and we've touched on this already, that sometimes you get most of the way there, but that final 20 percent, the real refinement to get to exactly what you pictured in your head, or exactly what you want and didn't picture in your head, sometimes requires another tool? I've heard some people are using Facetune or different AIs to take it to the final level, or I guess you could also use inpainting and outpainting a little more discreetly. How have you found the relationship of one tool to the suite of other tools out there?

I think it has lots of exciting crossovers, but actually I think it's a big opportunity for the Photoshops of this world, because those are tools that presuppose you have some kind of original image to manipulate, and now there's a huge amount of that raw, but maybe not perfect, material there for people to work with. There are also lots of things I've been trying to do in prompting that are actually more easily achieved in other tools. You can spend ages trying to get this kind of vintage-film look, but if you're an Instagram influencer, which I'm sure you are, you'll know there are loads of iPhone apps out there that exist just to give all your photos that dreamy vintage-film look.

Yeah. Back in July, when you first wrote your prompt book, you included a requested-feature list for DALL-E 2, and I feel like some of those have been implemented, though I haven't cross-referenced them. Are there things on your new list? As in, "these tools are great, but they're missing X, Y, Z, or lacking in these areas; this would be top of my list to see improved."

You know, I think we're going to see more models come out, and the fact that Stable Diffusion is open source means lots of other things are going to be built on top of it. It's going to be really exciting to see the directions people take that in: either on an individual, sort of prosumer level, people building their own models to create their own stuff, or, more likely, bigger organizations training it for specific purposes. The whole challenge, the whole opportunity at the moment, I think, is: how do you go beyond the text box? How do you go beyond this blank rectangle to create something that's more user-friendly, more inspiring, more how people think? Because on the one hand, if you're not an artist, the ability to describe things with words is definitely a big step forward. But if you think about the next layer: it's still quite hard to describe things with words. Designers, when they do work for clients, it's one of their pet peeves, because clients don't like it but can't explain why, or what they want different. They're like, "oh, I want it to be more... you know what I mean? More..." and the designer is like, "I don't know what that means," which is basically the position these AI models are in. So I think you're going to see: could there be a conversational interface? Can you do the generations fast enough that you're always showing people multiple options, possible new directions, almost like in a multi-dimensional space, where it's, "do you want to take it more this way, or more this way?" Part of the prompt book is that I didn't know what metaphysical painting or Kodachrome or all these things were, but those at least have names.
Once you know the name, you're like, "oh, that thing, okay, cool, I didn't know it was called that." But there are probably other aesthetics, other styles, that we don't actually have words for. Like, you know that kind of gritty, but modern gritty, almost shiny gritty, like the grit has a shine on it? I could probably make you a mood board of that, and you'd go, "oh yeah, that's a thing," but there's no word for it. If you can create ways of unleashing the inexplicable, the undefinable, which is, at the end of the day, the exciting thing about visual art: expressing things, or moods, that you can't quite put into words.

You totally have my mind spinning with different ideas. A couple that came to mind: one is just a better onboarding experience, one where you're guiding the new prompter to understand how all these things fit together. To your point: "try this; oh, look what you got here; did you notice how, when you use these two prompts together, this one kind of overshadows the other? Maybe there's a third word that's a synonym of this." I think you've done this on your own, by prompting like crazy and going through these prompt libraries, sorting the signal from the noise, but any one of these models, or something built on top of them, could have a much better onboarding experience. And then, in terms of visual styles, it reminds me of something: you mentioned Instagram, and a lot of Instagram influencers, for a period of time, were selling filters, because they had figured out the precise tuning of every little variable. That sounds easy, but I tried it myself and never managed to create good Lightroom filters; other people had, and they would sell them. So I wonder if you'll see the same thing, where someone creates a zip file of a mood board and then trains the AI in some way that makes it play nice with that particular concept, the one you can't distill into a single term.

Someone did a paper on that. It's almost what led to that selfie craze, which was: you don't need to put photos of yourself in that original 600-million-image training data, or wait for the next time they train it, for it to learn what you look like. There's this embedding trick where you can show it a bunch of photos of you, and then you can refer to yourself with a new made-up token, and it knows how to recreate that. And there was an interesting thing in the same paper that hasn't really been used or commercialized in the same way, which is to do that with style. So rather than showing it "this is what this person looks like," it's "this is what the style of blah-blah is called; here it is, off you go." Which obviously has all kinds of potentially shady legal connotations, but let's assume this is lovely art we've made ourselves.

To the idea of honing in on a style: I do wish there was a version of the product where I could, as we've talked about, upload certain brand images or brand colors, and have it iterate with me, where it shows me a bunch of images and I say, "it's okay, but I want a little more of this color, I want it X, Y, or Z," and we keep doing that until I get a bunch of images where I go, "yes, this is the style." And you can lock that in; lock it into a variable that you can then plug into future prompts.
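The paper Guy describes sounds like textual inversion ("An Image is Worth One Word", Gal et al., 2022), though the episode doesn't name it. Assuming that's the technique, here is a hedged sketch of loading a learned concept embedding in diffusers; the community concept and its placeholder token are one published example, not anything from the episode:

```python
# The "embedding trick": a small learned embedding lets a placeholder
# token stand in for a subject or style in future prompts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a community-trained concept; training your own on a handful of
# brand images would produce a similar embedding file.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# <cat-toy> is the placeholder token this embedding was trained with.
image = pipe("a travel poster in the style of <cat-toy>").images[0]
image.save("locked_style.png")
```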
There are some people out there who have been sharing interesting examples themselves: they've managed to lock in a particular look, and now every blog post on their lovely content-marketing site has that same consistent thing, which is pretty cool. But we haven't seen that built into the foundation models as a way of interacting with them. And then there are a few startups, like Scenario, which is doing it for game assets, and Leonardo, which is more multi-purpose, I think, or is just positioning itself that way. Again, it's all about: can you control things down to a consistent look?

What we've talked about so far is this idea of controlling the AI, but I also like to think about the ways that, when you work with these models, you learn more, whether about art, about your own creativity, or about how you think about these images. The example it reminds me of is chess: when we finally built bots that were better than humans at chess, not only were we surprised that it could happen, we were also surprised by all the openings and moves that humans, in their thousands of years playing chess, had never considered, moves better than what even the best players in the world had used. Have you seen any of that, even at a personal-experience level, where you're in these tools, playing around, and you're learning with the model, if that makes sense? It's almost surfacing things you had never considered before.

I like that. I think what you're touching on is that whenever you're using these tools, you have two modes, right? You're either waiting to see what it shows you, or you're visualizing it in your mind and going, "no, not that, not that." But if you just let it take you where it wants to go, then suddenly you're like, "I have no idea what I'm looking at, but apparently I'm here." Within DALL-E there's this variations tool, so you give it an image and it says, "here's four more that are kind of the same," but over time, if you leap and leap and leap, you end up on this completely bizarre visual journey. It's like a psychedelic dream.

It's fun to play around in these tools, but ultimately, and there is a market for just interesting art in the world, a lot of this will need to ladder back into something: whether it's blog posts sharing images, or creating the next sneaker design that you end up selling. Are there areas you've seen really emerge, where people are using these tools today and applying them to what someone might call a practical use case? And beyond what you've seen so far, are there other areas where you're excited to see this applied?

Sure. It's interesting, though, isn't it, because I think, especially given the tenor of the conversation around these tools and the ethical and legal aspects therein, I suspect that, to an extent, when you see these things used, especially in prominent contexts, they might not be advertised as such. Much like green screen, right? When green screen is used in films, you shouldn't think, "that is an amazing use of green screen"; you should just think, "oh my god, he's dangling off a thing, this must have cost millions."
So when we see AI tools used in lots of contexts, it's not that it's covered up, but they might be just a narrow part of the creative process. Someone raised this point online, I think, one that you were also making, which is: where is this all going? Will it ever make images good enough, and will other people even want to look at them? Because it's not like we have this huge history of logging in to social media to look at abstract pictures: "oh, a horse on a surfboard." Things tend to have a grounding in reality, right? That's what makes them viral or interesting: a real thing that happened. But then someone said, "no, maybe it's not that it will make content so good that it's better than Netflix or better than Instagram; it's the hobby of doing it. That's the entertainment."

Well, there are skills out there, to your point. Writing, as an example: some people just like to write, and whether other people read it doesn't matter; they enjoy the process. So I definitely could see an entertainment angle. But to an extent, even with writing, a lot of people really hate writing, and a lot of people find value in the money they get paid to write, or in the writing being used in a script that gets published on Netflix. So it's a question of how this stuff gets used in the wider world, whether it's on an e-commerce website, or one day integrating with 3D printing, where the stuff you generate in Midjourney can actually be printed into a real-life product that you sell. This isn't just a gimmick; it isn't just a toy.

There's this very high-level debate around artistry, I suppose, as if everything is either going to be in the Louvre, or the Tate, I'm from London, or in the bin. But ultimately, if you look around any space you're in, at everything that has a visual component or a design component, there are so many different levels at which we engage with art: the pattern on a cushion, the warning label on the coffee maker, the sausage dog on a card. They're all different things. For some things, the human touch is literally the point; for others, it's a soothing pattern to look at so that your wall isn't just gray. There are all kinds of layers in between, and I think we'll see these tools used more and more.

So the final thing I want to ask you about is how this all fits into the wider skill set that people might have. On one hand, I can see an argument that the prompt engineer is a role only a few can do really well: people who really master this skill set will be much more valuable than people who don't know how to prompt well. But I can also see an argument where, as you said, this gets abstracted away, and we get great UIs, and it truly becomes the kind of thing that basically anyone can do reasonably well, similar to being able to read and write: fundamental, elemental skills that are in everyone's toolbox, taught in schools. Where do you sit with that, in terms of how you see this progressing?
You could also position the question as: is it worth becoming an excellent, top-one-percent prompt engineer, or is it more that everyone should have this in their toolbox?

Well, that depends. On the one hand, there's obviously every incentive for the people that make these foundational tools to make prompt engineering, for instance, not a thing, because they want everyone to be able to do it, right? If you look at the most recent Midjourney model, version 4, it does stuff that would not have been even possible six months ago. You can literally do the thing where you type in, well, I can't remember your example, but I remember because I posted one that someone was arguing about. I said, "look at this space stuff, I just typed in 'space duck,'" and it's this amazing astronaut duck, and he said, "there's no way you just typed that in." So I went back and checked, and I had to admit I'd lied: I'd actually typed in "a really cool space duck." But at the same time, as with any material, artistic or otherwise, if you push things to the boundary, there's always going to be someone who explores everything that's possible, or just iterates and iterates and iterates on something; they're obviously going to explore further on the map of what's possible than someone who isn't. So I don't think it will become this necessary skill that everyone needs to have, but I do think it will be like, you know, expert wood whittlers, or the people who are really good at animating hair, or whatever: the people who develop a real passion for it, whether creatively or out of curiosity to see what's possible, will do some of the most amazing things. And then there's also the kind of secret prompting. A copywriting thing would be the obvious example at the moment: you think you're typing something into a UX, but really there's something else wrapping that in a prompt and then sending it to a foundational model. So there will probably be people whose job is to come up with that layer of thing that the consumer, the average person, never sees: they think they're just talking to the AI, but really they're talking to this thing that adds a little bit of juice and then tells the AI.
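That hidden layer can be as simple as a string template. A toy sketch follows; the template text and function name are hypothetical, purely to illustrate the pattern Guy describes:

```python
# A toy sketch of the "secret prompting" layer: the user's words are
# wrapped in a hidden template before anything reaches the model.
HIDDEN_TEMPLATE = (
    "Award-winning product photograph of {subject}, softbox lighting, "
    "85mm lens, shallow depth of field, muted background"
)

def wrap_user_input(user_text: str) -> str:
    """The consumer types user_text; the model sees the wrapped version."""
    return HIDDEN_TEMPLATE.format(subject=user_text)

print(wrap_user_input("a pair of red sneakers"))
# -> "Award-winning product photograph of a pair of red sneakers, ..."
```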
This is going to be a tangent, but it reminds me of a Reply All episode I just listened to, where someone remembered a song from his childhood and they were trying to figure out what it was. Have you heard that episode?

It's the only one I have, but it was so famous.

Do you remember, in the episode, they find this lady who is a music producer, but specifically for people who want to create music like the Barenaked Ladies?

Yes!

People have jobs like this. When you grow up and you're in school, they tell you, "you could be a doctor one day, you could be a teacher one day." They don't tell you that you could be a music producer for musicians who want to sound like the Barenaked Ladies. And it makes me wonder what specific niches people will go into within this realm of prompt engineering. Maybe you specialize, as you said, in hair, maybe in hands, maybe in something for enterprise SaaS companies, I don't know; it's hard to predict at this point, since we're so early. But I think you're right that there will be, I guess, a bimodal nature to it. It does seem like the kind of tool that's going to be on everyone's desktop, but it also seems like there's this opportunity to become, as someone might say, a 10x prompt engineer.

I think that's interesting, isn't it, because that's such a tech-world metaphor, the notion of 10x; it implies there's a scale where you can have one of something and therefore ten of it. In the record industry, do people talk about being a 10x recording engineer? Obviously some recording engineers are famous and better than others, and there's all this talent, but I don't know if people say, "yeah, I'm a 10x." But just like producers and all the people that go into making music or film, you know, that huge list of people you see at the end of every movie, where you discover a whole new world of careers you might have had; I'll unfortunately never be a best boy, but I'm still hoping to be a gaffer. There will be all those kinds of jobs, I think, in the creative AI industry.

To your point on the spectrum of what is 1x and what is 10x, it reminds me of a question: what is the most popular kind of, you could say, art or imagery that's shared online? What comes to mind for you there?

I don't know. I feel like you said that as if you know the answer.

Well, I have an answer.

Photos of parties?

For me, at least, as someone who spends a lot of time on Twitter, it's memes. And memes are the most basic kind of imagery ever: literally an image with some capitalized text on it. So your point reminded me of this idea that art, especially, is subjective, and what people like and resonate with is not necessarily the most refined, extravagant, precise type of imagery, which is what you can generate in some of these text-to-image tools. It doesn't necessarily mean people are going to resonate with it.

Exactly. I mean, until they invent an AI that can do 10x memes, which is the last thing we need. [Laughter]

I love it. We'll definitely share the prompt book link in the show notes, because I think people can benefit from seeing the different types of modifiers you can include in a prompt, and also a link to your social, because you're constantly sharing new hacks and new things you're discovering. But are there any other places people should look to find you or your work?

You can find me on Twitter at @GuyP, and you can find my Substack, when I finally post, at promptresponse.substack.com.

Awesome. Well, thanks for doing this.

Thanks for listening to the a16z podcast. If you like this episode, don't forget to subscribe here on YouTube to get our exclusive video content. We'll see you next time.

[Music]
Info
Channel: a16z
Views: 25,991
Id: PFsbWAC4_rk
Length: 39min 42sec (2382 seconds)
Published: Thu Mar 09 2023