As if I don't have enough reasons not to trust software AND the large corporations that own it:
“If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads. “In summary, we advocate for an approach to research that centers the people who stand to be affected by the resulting technology, with a broad view on the possible ways that technology can affect people.”
What a novel idea - focusing on ensuring the technology actually helps people, rather than manipulating and controlling them.