AI doesn’t have to be this way

Not all technological innovation deserves to be called progress. Some developments, despite the conveniences they provide, may not deliver as much balanced societal advancement as advertised. One researcher who stands apart from the tech boosters is Daron Acemoglu of the Massachusetts Institute of Technology. (The “c” in his surname is pronounced like a soft “g.”) IEEE Spectrum spoke with Acemoglu, whose research areas include labor economics, political economy, and development economics, about his recent work and whether technologies such as artificial intelligence will have a net positive or negative impact on human society.

IEEE Spectrum: In your November 2022 working paper, “Automation and the Workforce,” you and your co-authors say that the record is, at best, mixed when AI encounters the workforce. What explains the discrepancy between the growing demand for skilled labor and employment levels?

Acemoglu: Companies often lay off their less skilled workers and try to increase their hiring of skilled workers.

“Generative AI can be used, not to replace humans, but to be useful to humans. … But that’s not the path it’s on right now.”
—Daron Acemoglu, Massachusetts Institute of Technology

In theory, higher demand and tighter supply should lead to higher prices, in this case, higher salary offers. It stands to reason, based on this long-accepted principle, that companies would think “more money, fewer problems.”

Acemoglu: You might be right to some extent, but… when companies complain about a skills shortage, I think part of it is that they’re complaining about the general lack of skills among the applicants they see.

In your 2021 research paper “Harms of AI,” you argue that if AI remains unregulated, it will cause substantial harm. Can you give some examples?

Acemoglu: Well, let me give you two examples from ChatGPT, which is all the rage nowadays. ChatGPT can be used for many different things. But the current trajectory of the large language model, exemplified by ChatGPT, is very much focused on the broad automation agenda. ChatGPT is trying to impress its users. What it’s trying to do is be as good as humans at a variety of tasks: answering questions, conversing, writing sonnets, writing essays. In fact, at some things it can be better than humans, because writing coherent text is a challenging task, and predictive tools for what word should come next, built on a large corpus of data from the Internet, do it fairly well.

The path that GPT-3 [the large language model that spawned ChatGPT] is on is one focused on automation. And there are already other areas where automation has had a deleterious effect: job losses, inequality, and so on. If you think about it, you’ll see (or you could argue anyway) that the same architecture could have been used for very different things. Generative AI can be used, not to replace humans, but to be useful to humans. If you want to write an article for IEEE Spectrum, you could go and ask ChatGPT to write that article for you, or you could use it to create a reading list for you that might surface things you didn’t know yourself that are relevant. The question would then be how reliable the various items on that reading list are. Still, in that capacity, generative AI would be a human-complementary tool rather than a human-replacing tool. But that’s not the path it’s on right now.

“OpenAI, taking a page out of Facebook’s ‘move fast and break things’ code book, just dumped it all out. Is that a good thing?”
—Daron Acemoglu, Massachusetts Institute of Technology

Let me give you another example that is more relevant to the political discourse. Because, again, the ChatGPT architecture is based on simply taking information from the Internet that it can get for free. And then, with a centralized architecture operated by OpenAI, it has a conundrum: if you just take the Internet and use your AI tools to form sentences, you’re very likely to end up with hate speech, including racial epithets and misogyny, because the Internet is filled with that. So, how does ChatGPT deal with that? Well, a bunch of engineers sat down and developed another set of tools, mostly based on reinforcement learning, that allow them to say, “These words are not going to be spoken.” That is the conundrum of the centralized model. Either it’s going to spew hateful things or somebody has to decide what is sufficiently hateful. But that is not going to be conducive to any kind of trust in political discourse, because it could turn out that three or four engineers (essentially a group of white coats) get to decide what people can hear on social and political issues. I believe those tools could be used in a more decentralized way, rather than under the auspices of centralized big companies such as Microsoft, Google, Amazon, and Facebook.

Rather than keep moving fast and breaking things, innovators should take a more deliberate stance, you say. Are there some definite no-nos that should guide the next steps toward intelligent machines?

Acemoglu: Yes. And again, let me give you an illustration using ChatGPT. They wanted to beat Google [to market, understanding that] some of the underlying technologies were originally developed by Google. So they raced ahead and released it. It is now being used by tens of millions of people, but we have no idea what the broader implications of large language models will be if they are used this way, how they will affect journalism or middle-school English classes, or what political implications they will have. Google is not my favorite company, but in this instance, I think Google would have been much more cautious. They were actually holding back their large language model. But OpenAI, taking a page out of Facebook’s “move fast and break things” code book, just dumped it all out. Is that a good thing? I don’t know. OpenAI has become a multibillion-dollar company as a result. It was in effect part of Microsoft anyway, but it has now been integrated into Microsoft Bing, while Google lost close to $100 billion in value. So, you see the high-stakes environment we are in and the incentives that creates. I don’t think we can trust companies to act responsibly here without regulation.

Technology companies have asserted that automation will put humans in a supervisory role rather than simply killing all jobs. The robots are on the floor, and the humans are in a back room overseeing the machines’ activities. But who’s to say the back room isn’t across an ocean rather than across the wall, a separation that would enable employers to cut labor costs by moving jobs offshore?

Acemoglu: Correct. I agree with all of those statements. I would say, in fact, that this is the usual excuse of some companies engaged in rapid algorithmic automation. It’s a common refrain. But you’re not going to create 100 million jobs of people supervising, providing data, and training algorithms. The point of providing data and training is that the algorithm can now do the tasks that humans used to do. That’s very different from what I call human complementarity, where the algorithm becomes a tool for humans.

“[Imagine] using artificial intelligence… for real-time scheduling, which may take the form of zero-hour contracts. In other words, I employ you, but I have no obligation to provide you with any work.”
—Daron Acemoglu, Massachusetts Institute of Technology

According to “Harms of AI,” executives schooled in slashing labor costs have used the technology to help, for example, skirt labor laws that benefit workers. Say, scheduling hourly workers’ shifts so that hardly any of them reach the weekly threshold of hours that would make them eligible for employer-sponsored health insurance coverage and/or overtime pay.

Acemoglu: Yes, I agree with that statement, too. A more significant example is the use of artificial intelligence for monitoring workers, and for real-time scheduling, which may take the form of zero-hour contracts. In other words, I employ you, but I have no obligation to provide you with any work. You are my employee. I have the right to call you. And when I call you, you are expected to show up. So, say I’m Starbucks. I’ll call and say, “Willy, come in at 8 o’clock.” But I don’t have to call you, and if I don’t do it for a week, you don’t make any money that week.

Will the simultaneous proliferation of artificial intelligence and the technologies that enable a surveillance state bring about a total absence of privacy and anonymity, as depicted in the science-fiction film Minority Report?

Acemoglu: Well, I think it has already happened. In China, that’s exactly the situation in which city dwellers find themselves. And in the United States, it’s actually private companies. Google has an enormous amount of information about you and can monitor you constantly unless you turn off various settings in your phone. It is also constantly using the data you leave on the Internet, on other apps, or when you use Gmail. So, there is a complete loss of privacy and anonymity. Some people say, “Oh, that’s not that bad. Those are companies. That’s not like the Chinese government.” But I think it raises a lot of issues that they are using the data for individualized, targeted ads. It is also problematic that they sell your data to third parties.

In four years, when my kids are about to graduate from college, how will AI change their career choices?

Acemoglu: This goes back directly to the earlier discussion of ChatGPT. Tools like GPT-3 and GPT-4 may, on their current trajectory, eliminate a lot of jobs without delivering huge productivity improvements. On the other hand, as you mentioned, there are alternative paths that would actually be much better. AI advances are not preordained. It’s not that we know exactly what’s going to happen in the next four years, but it’s about the trajectory. The current trajectory is one based on automation. And if that continues, lots of occupations will be closed to your kids. But if the trajectory goes in a different direction, and becomes human-complementary, who knows? Perhaps they may have some very meaningful new occupations open to them.
