Should the bar for AI necessarily be higher? Fatal accidents with Teslas on autopilot, also a kind of AI, are widely reported, but 9 out of 10 times they turn out to be 'ordinary' accidents. Just how many road deaths (nearly 43,000 a year in the US alone) could we prevent if we eliminated all those inattentive, highly flammable and sometimes intoxicated drivers? I think it's the same with generative AI. In most cases, using it will result in a better text, image or video than if people had to do it alone.
That will also lower the threshold to create content. That is especially good for colleagues who can share their knowledge and ideas more easily. Of course, it can also be used to generate masses of fake news, but is that due to the AI? Who am I speaking to? The most important thing here is accountability, as I wrote earlier in my article on AI policy and governance. In my opinion, the (human) senders of content should always be fully responsible for the content. You can then address, punish or unfollow them if they have not properly checked AI-generated content.
Some regulators want a general obligation to be transparent about the use of generative AI. For example, the EU has just voted for an AI Act with strict rules and hopes that it will become a global standard, just like the GDPR privacy legislation. You must then always indicate to the user if content has been created by AI. But how do you define that? There is a large gray area between an AI chatbot and a creative human writing. What if people use AI for drafts, which they then edit themselves? Or use an AI-driven search engine for inputs? Then I think AI is a tool and not fundamentally different from a spell checker.