Duolingo cuts workers as it relies more on AI#

What’s this?

This is a summary of Wednesday 31st January’s Data Ethics Club discussion, where we spoke and wrote about the Washington Post article Duolingo cuts workers as it relies more on AI by Gerrit de Vynck. The summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara, Natalie Thurlby, Vanessa Hanschke, Amy Joint and Huw Day helped with the final edit.

The Washington Post article covered the news that in 2023, Duolingo terminated the contracts of 10% of its contractors. These were primarily translators writing content for lessons across the language learning site’s programmes. Duolingo has said that employees would not be replaced by AI, that AI is instead being used to improve productivity and efficiency, and that no fully AI-generated sentences have been deployed on the app. Nevertheless, previous employees have noted a drop in the quality of content released since last year. Contractors who have been kept on have seen their roles shift from creating lesson content to reviewing it. Similarly, it has been found that, worldwide, the number of adverts for freelance “automation-prone” roles fell by 21% after the introduction of ChatGPT.

Below are some key points discussed during the Data Ethics Club session:

Are there any circumstances where it could be considered ethical to lay off a small proportion of employees within a company for the benefit of the wider organisation?#

It is important to think about the differences between a “normal” redundancy process and the replacement of human employees with AI, which might not necessarily be up to the task. In the case of AI, the intention seems to be to save money and inflate profits, not to improve the quality of service. However, this might not be unusual; we see a pattern of job loss repeated every time a new technology comes through (e.g. cars, the printing press).

We would like to think that it is never ethical to lay off employees; however, the issue is not clear cut. On one hand, layoffs benefit the company, not the employees - costs are reduced, and the stock price goes up. On the other hand, if nobody is laid off, benefits and wages might have to be slashed because the company has less money. For example, in some consultancies all staff took a pay cut during the pandemic. We aren’t sure whether this is a better alternative, and wondered whether wages tend to return to their prior rates.

A reduction in wages, or the threat of redundancy, might require people to upskill in order to keep their value as employees. We wondered how much personal responsibility people have to upskill, or to know what they should be upskilling in. Good managers would be on top of this, thinking about continuity and the development of their employees as part of the development of the company. We feel most fulfilled in environments where we are supported and guided to improve, and this pays off in the quality of our work.

Legislation and guidance around AI regulation such as the UK’s National AI strategy focuses on the creation of new jobs to integrate AI, rather than the protection of existing ones - what more should be done at this level and at a corporate level to support this?#

This type of regulation risks encouraging managers to use AI rather than employ humans. Currently, these decisions are not made transparently. We would like to see the introduction of processes which companies have to go through to obtain permission to replace human employees with AI. Employees should have more say in the continuation of their jobs; it is worrying that certain types of jobs may become unavailable. It would be more acceptable if people were aware of and engaged in the process. Taking these jobs away could mean that those working in them have no other job or livelihood; this situation reminds us of what happened to the mining industry.

Retaining jobs for people can be aided by implementing AI as an assistive mechanism, rather than a complete replacement for a human employee. Rather than replacing a “full worker”, AI can cover some of the responsibilities of multiple different people. A subset of tasks is removed, instead of an entire role. This means that the job doesn’t vanish, but tasks are merged and responsibilities combined. AI just becomes another tool we are using.

Whether a job is replaced by AI or just updated might depend on what the job is. For instance, if the job is mainly content generation, it could easily be replaced by AI. If the human role becomes checking the content which AI produced, this arguably becomes a completely new and different job. In the case of Duolingo, rather than creatively designing lessons, the job becomes prompt engineering. We had some concerns about the quality of AI replacing human input. Skilled workers join companies because they have abilities which they have honed and want to do a good job, yet tools are introduced to do these tasks which often perform more poorly than the skilled worker.

On the other hand, prompt engineering is a skill in itself, and the combination of human and AI could result in something much better than a human on their own. Perhaps the issue is actually just semantics; we might compare it to the transition of editors from typewriters to desktop computers. It is also important to consider that changes in job structure might not just be because of AI – environmental sustainability requirements will have an effect.

What other job roles are at risk of content creation professionals being replaced by reliance on AI chatbots? Should companies have to be transparent publicly about their use of AI?#

There is an interest in using AI for mental health chatbots; Eliza, one of the first chatbots, was an attempt to simulate a psychotherapist. This might be appropriate, for example, as a space for people to talk about problems they wish to keep confidential. On the other hand, people find AI counsellors “weird”, suggesting AI might not be up to the task.

Transparency is essential for chatbots; it must be clear whether you are interacting with a human or with AI. What happens with the data from these interactions must also be transparent. At Crisis Text Line, data has been used to improve the empathic ability of the customer service company Loris AI. This started with the intention of creating empathy training videos, and over the course of its first year morphed into selling customer service technology. There are also political issues, exemplified at the National Eating Disorder Association (NEDA), where AI was used to replace workers at an eating disorder helpline four days after they unionised.

Like the changes to the mining industry, it’s likely that certain jobs will drastically change or shrink, probably involving layoffs. One way these changes might be reflected is through a pay structure where idea generation is cheap and structural understanding is more highly valued. Proofreading is a very likely candidate for this. Short-sighted approaches might cut staff and focus on increasing output, paying less attention to declines in quality. However, jobs like proofreading suit some people very well, such as those with mobility issues or new parents, and we need to think about how they can be compensated. It is important to consider ways to save money which don’t involve cutting employees.

For anyone who uses Duolingo, have you noticed any difference in the quality of the phrases and lessons in the last six months or so? Are there other apps, products and services you’ve noticed dip in quality due to AI?#

Duolingo might get away with a reduction in quality because it doesn’t have many fluent users. We found it disheartening that we might not notice a drop in Duolingo’s quality because we don’t understand the languages well enough. In other areas, the Scottish Government once used Google Translate to turn “Happy Burns Night” into Gaelic. They made a batch of government-branded images for social media that wished people a happy what-happens-to-your-skin-when-you-touch-fire night, rather than a night celebrating the poet.

Attendees#

  • Nina Di Cara, Research Associate, University of Bristol, ninadicara, @ninadicara :flag-wales:

  • Huw Day, Data Scientist, Jean Golding Institute, @disco_huw, :de: :ru: (on a 562 day streak btw)

  • Vanessa Hanschke, PhD student, University of Bristol, :flag-id: :fr: :flag-sa:

  • Amy Joint, freerange publisher, @AmyJointSci :flag-wales:

  • Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet, :flag-scotland: (Gaelic)

  • Robin Dasler, data product manager on hiatus, daslerr, :fr: :de: (sort of)

  • Virginia Scarlett, Open Data Specialist, HHMI Janelia Research Campus :flag-mx:

  • Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane :flag-fi:

  • Harry Milnes, Data Scientist, Department for Energy Security & Net Zero :flag-pl::flag-de::flag-es:

  • Rachael Laidlaw, PhD Student, University of Bristol :flag-it: :flag-gr:

  • Ushnish Sengupta, Assistant Professor, Algoma University :flag-ca:

  • Noshin Mohamed, Service Manager for Quality Assurance in Children’s Services