Is ChatGPT a “virus released into the wild”?


More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco, shortly after he left his position as president of Y Combinator to become CEO of the AI company he co-founded in 2015 with Elon Musk and others, OpenAI.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. He said, for example, that the opportunity with AI — machine intelligence that can solve problems as a human can — is so great that if OpenAI were to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said the company “wouldn’t have to release research” because it was so powerful. Asked whether OpenAI was guilty of fear-mongering — Musk has repeatedly called for organizations developing AI to be regulated — Altman talked about the risks of not thinking about the “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points in the conversation, unsure how seriously to take Altman. But no one is laughing now. While machines are not yet as intelligent as people, the technology OpenAI has since released has taken many aback (including Musk), with some critics fearing it could be our undoing, especially with more sophisticated technology reportedly coming soon.

In fact, though heavy users insist it is not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to work through the implications. Educators, for example, wonder how they will be able to distinguish original writing from the algorithmically generated essays they are bound to receive — essays that can evade anti-plagiarism software.

Paul Kedrosky is not an educator per se. An economist, venture capitalist, and MIT fellow, he describes himself as “a frustrated naturalist with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for unleashing this pocket nuke on an unprepared society.” Kedrosky wrote, “Obviously, I feel ChatGPT (and its ilk) should be pulled immediately, and, if ever reintroduced, only with tight restrictions.”

We spoke to him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is “the most disruptive change the American economy has seen in 100 years,” and not in a good way.

Our conversation has been edited for length and clarity.

TC: ChatGPT was released last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past, and this is obviously a huge leap beyond those. What troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious stuff, like high school essay writing, but across almost any domain where there are rules — [meaning] an organized way of expressing yourself. It could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation for whatever was used to train it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they no longer have any idea what’s fake and what’s not. So to do this casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows, so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It sounds like it could devour the world.

Some might say, “Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work?” Because this is a broader phenomenon of a kind. But this is very different. These specific learning technologies are self-catalyzing; they learn from the requests. So robots in a manufacturing plant, while they do create incredible economic consequences for the people who work there, don’t then turn around and start consuming everything going on inside the factory, moving across sector by sector, whereas that’s not just something we can expect here — it’s what you should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people scoffed that he didn’t know what he was talking about. Now we’re facing this powerful technology, and it’s not clear who steps in to address it.

I think it’s going to start in a bunch of places at once, most of which will look pretty unseemly, and people will be snarky [about them] because that’s what technologists do. But too bad, because we got ourselves into this by creating something with such consequences. And in the same way the FTC has required for years that people who run blogs [make clear they] have affiliate links and make money from them, I think that at a trivial level people are going to be forced to disclose something like that: that all of this was generated by a machine.

I also think we’ll see new energy around the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine-learning algorithms. I think there’s going to be a broader DMCA issue here with this service.

I think there’s the potential for a [massive] lawsuit and eventual settlement with respect to the consequences of these services, which, as you know, will probably take too long and won’t help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What is the thinking at MIT?

Andy McAfee and his group there are more optimistic; they have a more traditional view that any time we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be persuaded to believe that this particular development of technology is the one we can’t navigate around. And I think that’s broadly true.

But the lesson of the past five years in particular is that these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we economists all told ourselves that the economy would adjust and that people in general would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict the consequences, but [we can’t].

We talked about high school and college essay writing. One of our kids has already asked — theoretically! — whether it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to demonstrate that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people do homework assignments because we no longer know whether they’re cheating or not, that means everything has to happen in the classroom and has to be supervised. Nothing can be taken home. More has to be done orally, and what does that mean? It means school just got a lot more expensive, a lot more artisanal, a lot smaller, at the exact time we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering the service anymore.

What do you make of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m a much less strong proponent than I was before COVID. The reason is that COVID, in a sense, was an experiment with UBI. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens when people don’t have to hop in a car, drive somewhere, do a job they hate and go home again, because the devil finds work for idle hands, and there will be a lot of idle hands and a lot of devilry.


