Why Reid Hoffman believes A

On this week’s episode of Fortune‘s Leadership Next podcast, co-hosts Alan Murray and Michal Lev-Ram talk with Reid Hoffman, cofounder of LinkedIn and partner at Greylock. They discuss the pros and cons of generative A.I.; why Hoffman thinks the “A” in A.I. should stand for “amplification” instead of artificial; and the clone voice he put to work for the audiobook version of his new book, Impromptu: Amplifying Our Humanity Through AI.

Listen to the episode or read the full transcript below. 

Transcript

Reid Hoffman: The reason I like light bulb jokes is I feel that they’re a form of cultural haiku. How many Californians does it take to change a light bulb? Five. One to do it. Four to share the experience.

Alan Murray: Leadership Next is powered by the folks at Deloitte, who, like me, are exploring the changing rules of business leadership and how CEOs are navigating this change.

Welcome to Leadership Next, the podcast about the changing rules of business leadership. I’m Alan Murray.

Michal Lev-Ram: And I’m Michal Lev-Ram.

Murray: Michal, you want to tell us why Reid Hoffman was telling light bulb jokes at the beginning of this episode?

Lev-Ram: Because they’re funny. But really, the reason is that this has become kind of his own little Turing test for ChatGPT. And what he realized with GPT-4 is that this technology, A.I., is finally at a stage where it can tell some pretty funny, and, you know, compelling and human-like light bulb jokes. He’s tried this before with other iterations, other technologies, and that hasn’t been the case. So this was kind of a little a-ha moment for him. This is ready for primetime.

Murray: There is no question that A.I. is the topic of the moment. Every conversation I have with a business leader these days, sooner rather than later gets into A.I. ChatGPT, I think, has captured everyone’s imagination.

Lev-Ram: Yeah, and that’s why we thought it would be a good topic to discuss on Leadership Next. Reid knows a lot about generative A.I. and A.I. more broadly, for many other reasons. He is the cofounder of LinkedIn, perhaps he’s best known for that. But he’s also a partner at the VC firm Greylock, and he was an early investor in OpenAI, the company that, of course, developed ChatGPT. And he’s written a book called Impromptu: Amplifying Our Humanity Through AI.

Murray: It’s a great book, but what’s distinctive about it is he actually used ChatGPT-4 to help him write the book.

Lev-Ram: I think Reid is out to prove that singularity is here. He is well on his way. And he had a lot of really interesting things to say, you know, how this technology is being applied today, what it can do in the future. And, you know, he’s also kind of like the light bulb joke, he’s fun to talk to.

Murray: He is fun to talk to. Let’s go to it. Here’s our conversation with the real Reid Hoffman. This is not a clone. This is not ChatGPT. This is Reid Hoffman himself.

Lev-Ram: Reid, welcome to Leadership Next.

Reid Hoffman: Great to be here.

Lev-Ram: All right. I’m going to start with your book, since that came out recently. I received a personalized copy just a little while back. And I’m curious, you know, obviously, you’ve been really early on in all things generative A.I., but what prompted you to write this book and especially the way you went about it, love to hear?

Hoffman: Well, I love the fact that you use prompted, given Impromptu is the title of the book. I’m sure with a general literary wit, that was a deliberate… 

Hoffman: …word. Exactly. It started with a kind of realization. When I got access to GPT-4, July, August last year, I realized that this was going to be the watershed moment that I had been predicting was upon us. And I wanted to kind of demonstrate some of my thinking on it. And my thinking, you know, reflected in the book, is that artificial intelligence is more “amplification intelligence” than artificial intelligence. And I said, Well, how do I show that? Not just tell it, but show it myself? And I was like, well, I could write a book. And I could write a book using GPT-4 as my co-author, the first book about A.I. with A.I. as a co-author. And then I said, Okay, well, what should it be? And I was like, well, maybe a travelogue through the different areas of human concern and experience. And obviously, you can’t get them all, but to select a set of the important ones. And then the personalized copy that you got: as I was starting to work on that, I realized that among the many transformations that A.I. as a personal assistant, a personal intelligence helping you, brings to you, is you can do this mass kind of book where you also are doing one-to-one. You can do prompts that are specific to a person, have the book be specific to a person. And it was like, Okay, well, let me do that, too.

Murray: Of course. 

Lev-Ram: I want to hear also about the kind of the a-ha moment for you, not just for the book, but for the technology. And what’s your deal with light bulb jokes? Has this been like a long-time thing for you?

Hoffman: Well, that’s in the personalized content thing. And the reason I like light bulb jokes is I feel that they’re a form of cultural haiku. Right? You know, how many surrealists does it take to change a light bulb? You know, fish. How many Californians does it take to change the light bulb? Five. One to do it. Four to share the experience. You know, things that kind of encapsulate, in this little haiku moment, you know, it might be a bad stereotype, but kind of a stereotype lens that fits within our kind of cultural experience. And it was one of the things that was amazing [about] GPT-4: it has a sense of humor. And it can at least do kind of dad jokes, of which, you know, light bulb jokes can also be a version. And then for me, the a-ha moment started years back; it was part of helping stand up OpenAI.

Lev-Ram: And you’re one of the earliest investors in this company, we should say. 

Hoffman: Yes, exactly. I helped Sam and Elon and others set it up, and then joined the board. And then in February, you know, felt that there would be potential conflicts between all the startups asking for special access and all the rest. Until I left the board, I was like, I can’t help you. And you know, it’s like, well, given my Greylock job as an investor, it’s always a call; it’s not usually the answer I want to have. And so we talked about it, and Sam said, Look, you can continue to help the company very well not being on the board. And I’ll continue to do that, and kind of fit my fiduciary and board responsibilities as such. And it’s like, we’re realizing finally the benefit of the transistor. Or if you want to look at it in a different lens, Steve Jobs said the computer is a bicycle for the mind. And now we have a steam engine for the mind. And we’re having a cognitive industrial revolution. And I knew that that would come, and exactly which year and exactly which shape, I thought it would come with the launch of GPT-4. But actually, since they launched ChatGPT, with 3.5 as the backdrop, and everyone could suddenly start using it, it was actually ChatGPT that kicked off the, oh my gosh, you know, this important moment is here now.

Murray: And it was with a light bulb joke.

Hoffman: Well, it wasn’t with the light bulb joke. That was part of my general exposure. I mean, I was also doing things like, I did this miniseries on Greymatter of fireside chat bots, podcast interviewing ChatGPT. So 3.5. And one of the things that I asked it, because I’d been using GPT-4 to do this, was, how would you apply Wittgenstein’s theory of following a rule and language games to large language models? And the fact that it gave me coherent, interesting responses was stunning, because it already means we have an A.I. that has superpowers, because most human beings on the planet cannot answer that question coherently. And so the fact that it could was just, you know, mind-blowing and awesome.

Murray: Reid, I’ve been talking to a lot of CEOs of large companies since generative A.I. popped into my consciousness, which was much later than it popped into yours, but I’d say last November. So I’ve had many of these conversations. To a person, they agree with you that this is transformative technology. But I have to say, most of them, maybe even the vast majority of them, don’t quite know how. They’re still not quite sure, what the hell do I do with this? Can you provide some guidance? What the hell do they do with this?

Hoffman: So, three lenses. First lens, that I published last fall with my partner Saam Motamedi from Greylock, is every professional activity. Obviously, there’s a wide range: you know, journalism, law, medicine, engineering, research analysis, investing. Each of these activities will have essentially a personal A.I. assistant or a copilot within two to five years. And that means that that assistant will be between useful and essential. That itself gives you industry transformation, because if you think about it, every industry has a bunch of professional activities, and that amplification will change them. You know, I wrote an essay last year, when DALL-E came out, saying, Look, this is like having Photoshop. If you’re a graphic designer and you don’t know how to use this image generation, it’s like saying, Well, I’m not a graphic designer, just like I didn’t know how to use Photoshop. It’s kind of a similar kind of amplification. So that’s lens one. Second lens is there’s going to be a shift in capabilities, kind of in the more general sense, which is, kind of think of it as research assistants. What these things are is like a research assistant that gives you an immediate answer. Now, the immediacy is amazing and important. But it will also be, although, you know, OpenAI and Microsoft and others are working on this, occasionally quite wrong.

Murray: I’m glad you said wrong, and not hallucinations or some sort of word that’s sort of fuzzes it over. It’s incorrect.

Hoffman: Yes. Exactly. It’s incorrect, and it’s incorrect with seeming vigor and strength and deep articulation.

Murray: We have some journalists like that.

Hoffman: It’s not an unhuman characteristic. And then the third is how products and services will actually, in fact, be changed. For example, let’s think of one of the areas where I think there will be substantial job impact, which is customer service, because it’s a cost center, and anything that’s a pure cost center, people will try to figure out, well, if you 10x every person, if you 10x every customer service rep, well, then we’ll have 10% of them. But say, for them, you’re looking at that function and you’re going, well, what if we could now make this function not just a what’s-the-cheapest-way-we-can-get-you-off-the-phone? We could make it a relationship-building moment, a brand-building moment, where we could help you and kind of interact and give things to you from our particular brand perspective and build our relationship with you. Well, that’s now available as kind of a new product.

Murray: And Michal, if I could follow up on that, because those three frames give you a great sense of what it can do. Can you just hit a few more notes about what it can’t do? Obviously, it can’t fact-check. We’ve established that. But it also can’t really reason or, you know, somebody said it doesn’t do math. Can you talk a little bit about the limitations?

Hoffman: Yes.

Lev-Ram: Even by the way, Reid, in your book, you mentioned, I think, asking GPT-4 for the fifth line of the Gettysburg Address and how challenging that is for the technology, which I found interesting. 

Murray: Counting.

Hoffman: Yes, exactly. One thing, and an easy way to screw these large language models up, is ask them about prime numbers, things that human beings can understand pretty well. And it’s very easy to get them to be equally insistent about something that’s wrong on prime numbers. So one cautionary note, and I will express limitations, is that the technology is evolving a lot. So for example, both OpenAI and Microsoft are working to have current information. They’re working to reduce hallucination. They’re working to have, you know, kind of sources of information, or ways that its error rate, kind of, in general, more approaches a human being’s error rate on these things.

Because remember, you know, our standard is a human being, right? And that is not error free. And so, you know, math, usually when you ask, because it’s trained to try to be super interesting in its response to you. Like, what would be really compelling and interesting to you. And if you ask it a question it doesn’t really know very much about—like I said, maybe it would know, Alan, your biography, but maybe it doesn’t. And I say, wow, you know, did Alan, like, ask the question in a way that presumes a yes: Did [Alan] create a really interesting journalism video game? And it’ll go, Oh, shit, maybe he did, and I don’t know about it. And then it’ll create this Wikipedia page about how you created a video game about journalism. That’s interesting. So that’s the kind of thing. And that’s also, you say, give me citations. And it goes, Okay, well, he really wants citations. So we’ll make some citations. And those citations are incorrect. Like, you would look up the citations and they don’t exist. And here’s the most funny one: one of the personalized books that I sent out, which I didn’t cross check. Some of the prompts I cross checked, because I wanted to see. But other prompts I didn’t cross check, because I thought, Oh, it’ll just get this right, it’s fine. So, create a music list for you. One of the music lists it created, it created three fictional songs. Like, those songs don’t exist. And you’re like, Oh, I wasn’t cross checking that because I didn’t think it would get that wrong.

Murray: Can’t we do better than that?

Lev-Ram: But why does it do that? I mean, is it just aiming to please humans? Like what, why? Why does it make up stuff?

Hoffman: Fundamentally, that. Because it’s trained to be generative and creative and interesting. And obviously through, like, algorithmic and human feedback, we’re trying to train it to be true as well, in which case it frequently is true. And it’s safer when it’s not factual stuff. It’s safer when it’s, like, principles. Like, you know, what would be the questions one would ask in due diligence of a technology company of type X? You know, hardware, software. It’ll be pretty good about, like, the general class of questions and so forth. Because as opposed to being factual about it, and going, Oh, I don’t know, so I’m going to invent because I’m creative, it’ll go, Okay, here’s the stuff, and it’ll be very, very good. And so that’s the reason why, as a co-pilot, as a personal assistant, it’s really good. Now, I think these are solvable problems. I think the math stuff is a solvable problem. I don’t think these will always be this way. But it’s kind of a snapshot in time about how to use them as a system, how to use them as a transformation of work. And that’s again one of the reasons why, if I say, well, I’ll just have, you know, GPT-4 do my marketing, well, that could be a bad idea.

[Music starts]

Murray: Jason Girzadas, the CEO-elect of Deloitte US, is the sponsor of this podcast and joins me today. Welcome, Jason. 

Jason Girzadas: Thank you, Alan. It’s great to be here.

Murray: I have a sense, Jason, from conversations on Leadership Next and elsewhere, that business leaders today better understand the benefits of having a diverse set of voices at the management table. But what are some of the lessons you’ve learned through Deloitte’s own DEI journey?

Girzadas: Yeah, lots of lessons learned. I think we’ve certainly made progress. We feel like that’s a function of a couple of things. Deloitte is very proud to have published twice a transparency report that sets forth long-term expectations for the diversity of our workforce, and how we hold ourselves accountable. That is meant to be and, I think, has served to be a role model stance for us to take, and one that we encourage all businesses to replicate. The second is to get specific. In addition to transparency, the specific objectives around gender diversity, around Black and Hispanic, Latinx, as well as other cohorts, that we have really established not only recruitment and retention, but also advancement goals for. And finally, adding to the mix, how we intend to hold ourselves accountable for supplier diversity, as well as longer-term ambitions for us in this space. So our experience is somewhat emblematic of what a lot of large organizations go through. But for us, the commitment and transparency, as well as the specificity around cohorts, has made a difference. And we’ve seen positive results in the last two years that we’re hoping to build upon. Do we declare success? Absolutely not. But it’s made all the difference for us.

Murray: Jason, thanks for your perspective and thanks for sponsoring Leadership Next.

Girzadas: Thank you. 

[Music ends]

Lev-Ram: I think it’s fascinating that you kind of have your own little personal Turing test for this technology, which is the light bulb joke. And clearly, one of the reasons I think that, you know, this has exploded into kind of the mainstream consciousness is because it’s so creative, and so fun to interact with. But there’s a lot of concern, there’s a lot of fears, about disruption to the labor market, and call it amplification or artificial, or whatever you want to call it. How should CEOs be talking about this to their employee base? We’re seeing IBM’s CEO has already come out and said that, you know, this will impact 30% of jobs in a certain category. But there’s a lot of fears. There’s, you know, the writers’ strike in Hollywood. That’s one of the fears, that they’re going to be replaced. I mean, Alan and I could be replaced, you know.

Hoffman: Not anytime soon.

Lev-Ram: Well, maybe next time, we’ll have your co-writer on.

Murray: Thank you for that, Reid. We’ll take that to the bank.

Lev-Ram: But really, like, how are CEOs thinking about this? How should they be thinking about this? What’s your advice to them? And, you know, also curious to hear these are a lot of questions. Sorry. But are tech CEOs looking at it differently than non-tech CEOs, do you think?

Hoffman: Tech CEOs are probably a little bit more familiar and a little bit ahead of the curve, but probably it’s similar as a group, as a tribe. So one lens into this is to think, so you said, okay, well, these assistants, these co-pilots, give everyone 10x superpowers. Look through a company and say, Well, you’ve got salespeople. Are you going to have fewer because you have 10x superpowers? No, no, we like sales, even 10x sales or whatever else. Now, the jobs will be different. So for example, oh, we hire these people to be running our digital ad campaigns, and it’s a lot of form filling and all the rest. Well, all the form filling stuff is going to be, you know, really amplified. We don’t need as many people doing that. We’ll need more people doing things like thinking about, well, what are the other ways to think about it and what to do. And if you walk through most of the areas (product, engineering, operations, finance, even legal, where, by the way, people are very hopeful that legal bills will go down), you go through the whole thing and you go, well, actually, in fact, it changes, it transforms the nature of the human job, but doesn’t necessarily go, okay, now we can slash and burn. Now, the IBM CEO’s comments, I think that was a little bit of a kind of, let me justify in a difficult market the fact that I’m kind of doing layoffs and freezes and so forth, and let me blame A.I. And I think we’ll see a lot of that. It’s way too early to be saying 30% of this job function is going away. The tools aren’t there yet. They might get there. And if you think you have no upside in your business, and you only have cutting costs and downside, well, then that will be a natural way to increase profits. I’m not saying it’s all clear sailing, blue skies, you know, etc., etc. These transformational moments will be real. There will be job transformation.
There will be some jobs that will be lost to this, and navigating all that is really important, both as CEOs and as societies. Now, one of the things I love about A.I. as a technology, and again, part of the reason why I did Impromptu, was to say, well, A.I. can be part of the solution. Like, say, take customer service. You say, well, all right, a bunch of customer service people are not going to have jobs. All right, well, how do you reskill them? How do you help match them to other jobs? How do you give them superpowers to do other jobs? Well, A.I. is an answer on all three of those things. And so when you say, Well, what should we be doing as leaders? What should we be doing as government people? What should we do? Well, let’s help people. Let’s use the technology to help do the transition to being in the full swing of the cognitive industrial revolution.

Murray: Reid, you’re an optimist, and Michal and I are optimists, and I think…

Lev-Ram: Wait, why did you lump me in with the optimists here? 

Murray: Okay, Michal is sometimes an optimist. And I think there’s some historical experience to support that optimism. But I want to take you down a dark hole here for a minute. I mean, I’ve been a journalist since I was nine years old. Michal has been a journalist her whole life. We were raised on a great respect for facts. We believe in facts. We think facts actually exist. That in some areas there is, you know, discernible truth, and we were trained on techniques to find it. That’s obviously deteriorated in recent years. Social media certainly has something to do with that. The fact that everybody is in our business now has something to do with that. There are lots of other reasons that you can cite. But I’m really worried about this, that this was loosed upon the world with zero respect for facts. And what is the effect going to be on our society as we continue to devalue and undercut the factual basis of our interactions?

Hoffman: Well, as a philosopher by training, I am also a great believer in truth with a capital T and facts with a capital F. I wouldn’t say it was loosed with zero respect. There was a lot of effort to try to get factual information. That doesn’t mean it’s perfect; its error rate is higher than we’d like, for sure. Also, by the way, there are easy ways to do this. There’s this whole stack of how the tech is going, which is, like, there’s going to be this area of meta prompting. And if you put meta prompts in that say, Well, this is a fact, and use this as part of your response, it will then conform to that fact. So I don’t think it’s zero regard for facts. I think that’s a nice slogan, but not true. Speaking of facts. But on the other hand, I completely concur: oh, my gosh, have we been having a degradation of civil discourse, of the importance of truth-seeking, of discerning facts, and we need to be there. And then we need to figure out how we get there as kind of human beings. And by the way, A.I. can help with that. So for example, one of the things that I most liked during the election, you know, this is Twitter pre-Elon, was one of the things that Twitter was doing, which is to say, hey, if something seemed very off expert consensus, open a little box around it, and say, Look here to get the facts. Right? It wasn’t saying, this is wrong, you can’t say the moon is made out of blue cheese, or 2020 was an unfair election, or whatever. But you could say, Hey, if you’re saying that, we’re going to put this little box around it to direct people to say, Over here is where you can find facts. And that kind of thing is the kind of thing that A.I. can help with a lot. And so I think it’s more of a human problem to solve, the problem that you’re talking about, Alan, and I want to solve it, and I think we should.
I think it’s necessary because, what we should be doing, and it’s one of the things I love about good media, of which, you know, part of the reason I’m on this podcast is I agree with you guys on this stuff, is to say, we should be collectively learning. Like, there is such a thing as facts, there is such a thing as truth, and we should be learning towards that together. It’s an infinite journey, but that’s a good thing to do. And so I’m strongly bullish on that.

Murray: That’s good. And I may have overstated my question. If I did, it’s because I asked ChatGPT to write my short biography, and it made me 10 years older than I actually am.

Lev-Ram: I think you are so personally offended… 

Murray: I was. 

Lev-Ram: I think, you know, the hope is obviously not only that the people who are leading the charge here are going to be thoughtful about it, but also that the regulatory powers that be actually make some smart decisions here. We’ll see. But in the meantime, it’s just moving so fast, which makes it so much more difficult, right, to do all that good stuff in conjunction. As an investor, though, putting your investor hat on: huge opportunity, you know. OpenAI aside, I feel like as a journalist, every other pitch I get, actually all pitches I get, have some generative A.I. slant at least. Like, what happens next? Are you seeing, you know, boundless opportunity? Is there some shakeout? What percentage of it is kind of B.S.? Like, who’s really utilizing generative A.I.?

Hoffman: Well, it’s just like any of these major tech waves. Even though I think A.I. is the most major of my lifetime, in part because it’s a crescendo. It builds on the internet, it builds on mobile, it builds on cloud, and it’s an amplifier across all of it.

Lev-Ram: Is it more major than any one of those individually? 

Hoffman: Yes, because it’s an amplifier, right? It amplifies on top of that. But remember, like, internet, we had all kinds of crazy stuff. Mobile, we had all kinds of crazy stuff. And so it’ll be a bunch of crazy stuff, too. There’ll be, you know, like, the it’s-not-really-A.I. overstated claims. It doesn’t really do what it claims to do. We’ll have all that stuff. That’s human entrepreneurship, when everyone’s running towards the gold rush. You will also, of course, have many, many amazing things. And so that’s, like, a super important thing for us to kind of move forward on. And of course, then the investing theory, you know, is, well, across all this now, I had the fortunate position to have seen this early. So we at Greylock started investing years ago on this stuff, which, you know, is like Adept and Inflection and Cresta and Snorkel and all these companies, and all of our portfolio companies started pivoting towards kind of the generative A.I., increasing their features, you know, like Tome and Coda and everything else, well before the public market realized it, because, you know, that’s one of the benefits of having a lucky venture firm along with you. And I think there’s a ton of stuff that’s still available. It’s not just like, oh, the really good investment was two years ago or three years ago. I think there’s a bunch. You have to be discerning about a lot of the principles that still apply within business, like, what’s your go-to-market? What’s your competitive differentiation? You know, why is it that this will be a good, for example, startup product, versus a good product from a larger company? Because there are, you know, some places here, not just the usual set of customer, kind of, in-depth enterprise relationships.
Some other advantages that the large companies have are, well, if you’re going to be training on a multibillion-dollar computer, you know, large companies do multibillion-dollar computers much better than startups. So you have to kind of sort through all that as an investor. But, you know, I think there is just, what is it? There’s gold in those hills.

Murray: So there’s one other issue that I think we need to address. And that is, what does this do to intellectual property? If somebody can take this podcast and create the Hoffman voice, how do you stop that? Or if somebody is painting pictures in the style of, how does the artists stop that? I was talking to somebody who’s pretty deep into the technology, who said, the first big challenge of the Supreme Court on this will be a copyright challenge. So what’s the answer to that?

Hoffman: Well, I think we’re going to have to work out new law for it. I think the old law won’t apply exactly right. Because, by the way, if I created a painting, you know, me painting in the style of X, that’s allowed. If I, you know, said, Hey, I’m going to, I’m totally incompetent at this so couldn’t do this, but if I were going to take either of you and try to, like, voice impersonate you, that’s allowed. You know, I can’t say that I’m you, but I could say that it’s in the style of, and that’s an allowed thing. So now I have this tool that suddenly gives me the superpowers to do it, that was previously limited. All right, am I, you know, am I allowed to do it in those ways now, because I have a tool? So the law is going to have to be careful on this, and we want to navigate it. Now, my suggestion would be, and this is early, so I could easily mod this suggestion in a couple of months as I see it through, because it’s kind of the human dynamics of, you know, protecting the intellectual work of human beings to be able to have the incentive to do it. It’s part of the reason why we have those laws in the first place. [Inaudible] I would tend to say that you have to kind of disclose that you’re using the tools. You have to be clear that it’s in the style of when you produce data. Just like robots.txt, of can you put it in a search engine or not, you have to say, can you use this for a training run or not? And, you know, contact me if you want to use it. You know, that kind of stuff, I think, is part of what elements of the future probably look like.

Lev-Ram: This is maybe one place where I’m not overly optimistic. Is the law catching up in time? We haven’t seen good examples of that. But maybe we’ll be surprised this time around. Okay, and perfect segue to the audiobook, because clearly you are embracing this. So tell us who’s going to be narrating the audio version of the book.

Hoffman: So one of the internal products that Microsoft has is an incredibly good voice cloning product, which I think they are unlikely to release, because they want to be good to all the creators and so forth, and they don’t want to have people voice cloning other people. But I went to them and I said, Look, this product is really amazing. I’ve just done this book. Can I use this product to voice clone myself? Because it’s me voice cloning me to do this. So that’s what we’re going to do. And I think it’ll be out pretty soon. We are cross checking it, you know, we’re the product alpha test, because it’s like, oh, the pronunciation of this unusual name? Not quite right. And there’ll be some of those errors anyway. But, you know, hopefully it will amaze and delight.

Murray: By the way, that voice cloning thing, I hate to take us to the dark side again, but that is one of the spooky things. I heard Nikesh Arora say that, you know, some people are doing that voice cloning to trick people into moving money around, you know, to do cyber attacks.

Hoffman: A hundred percent. But, by the way, again, when you say, well, A.I. is part of the solution: you have an A.I. assistant running on your phone that says, Wait a minute, are you sure about this? This could be a phishing attack. And that’s part of the reason why I think the good actor is moving faster to build up the defenses. That is definitely, you know, one of the, like, a whole bunch of cyber hacking is amongst the things. Human amplification, amplification of bad humans and bad activity, is precisely one of the things that we should be most worried about when we’re talking about the risks.

Lev-Ram: So what you’re saying is that we’re setting the stage for just this massive battle between good A.I. and bad A.I. That’s what’s going to happen basically.

Hoffman: Or A.I. in the hands of good humans and the A.I. in the hands of bad humans. 

Lev-Ram: Amplification. It all goes back to that. Reid, thank you so much. I feel like we could go on and on; we all have so many questions about this. This is the big question for all of us, you know, and I think not only in the business world but beyond. So thank you for shedding some light on amplification intelligence. That’s what we’re supposed to call it, right?

Lev-Ram: And I can’t wait for the audiobook. I’m going to spend some time with that Hoffman clone.

Hoffman: Yes. And I look forward to your feedback.

Lev-Ram: We’ll tell you which one we like better.

Hoffman: Uh-oh. I might be scared to hear that. But I’d be delighted.

Lev-Ram: Thank you, Reid.

Hoffman: Thank you.

Lev-Ram: Leadership Next is produced by Alexis Haut and edited by Nicole Vergara. Our theme is by Jason Snell. Our executive producer is Megan Arnold. Leadership Next is a production of Fortune Media.

Murray: For even more Fortune content, use the promo code LN25. That’ll get you 25% off our annual subscription at Fortune.com/subscribe.

Leadership Next episodes are produced by Fortune‘s editorial team. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte or its personnel. Nor does Deloitte advocate or endorse any individuals or entities featured on the episodes.
