It's a horrible idea to put an LLM anywhere near driving decisions. Recent research suggests that attempts to "align" them with human values via RLHF have merely converted overt bias into stronger covert prejudice. A voice-enabled chatbot in my car to help me talk through work or personal questions on long drives might be fine.

What happens when ChatGPT tries to solve 50,000 trolley problems? | Ars Technica

arstechnica.com/ai/2024/03/wou


@Transportist Maybe if you could train something BERT-like from scratch on such data, or fine-tune a model like StarCoder that hasn't been pretrained on vast swaths of internet text awash with human prejudice. But either of those would be antithetical to the cult of AGI...

transportation.social
