The eSafety Commissioner will abandon its legal case against Elon Musk's X seeking to have graphic footage of a terrorist stabbing removed from the social media platform.
Yeah, I was a bit torn on this. Initially I was on the government’s side because I don’t like it when big multinationals try to fuck us around (like Valve), but like the news thing a couple of years ago, I have to side with the tech giant.
If the government gets to decide what we can and can’t see then I think democracy is in trouble.
The government already does that to a large extent. The content in question is not viewable from within Australia unless you use a VPN.
True, they do a lot of this under the guise of copyright enforcement as well (which you can generally get around by changing your DNS). I don’t understand how this censorship is any different from the kind we look down on authoritarian countries for. I like the idea of a free and open web.
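For context on the DNS bit: the copyright blocks here are mostly implemented at the ISP’s DNS resolvers, so pointing your machine at a public resolver usually sidesteps them. Here’s a rough sketch of how you could check whether a block is DNS-only, using the dnspython library; the domain and the choice of public resolvers are just placeholders, not a real blocklist.

```python
# Sketch: ask different DNS resolvers about the same domain and compare answers.
# Requires the dnspython package (pip install dnspython).
import dns.resolver

DOMAIN = "blocked-site.example"  # placeholder, not a real blocked domain
RESOLVERS = {
    "system default": None,        # whatever your ISP/router hands out
    "Cloudflare":     ["1.1.1.1"],
    "Google":         ["8.8.8.8"],
}

for name, servers in RESOLVERS.items():
    resolver = dns.resolver.Resolver()
    if servers:
        resolver.nameservers = servers
    try:
        answers = resolver.resolve(DOMAIN, "A")
        print(f"{name}: {[a.to_text() for a in answers]}")
    except Exception as exc:  # NXDOMAIN, timeout, refused, etc.
        print(f"{name}: {type(exc).__name__}")
```

If the system default comes back empty (or pointing at a block page) while the public resolvers return real records, the block only lives in DNS; blocks done by IP filtering or SNI inspection wouldn’t care which resolver you use.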
The scope and nature of the content being censored, I guess. But you’re right that this approach to online safety regulation risks setting a dangerous precedent. I think in general the saga has highlighted how problematic it is that social media has become so intertwined with society. There’s a real risk of this stuff being viewed unintentionally, or because it was recommended through an algorithmic feed, and being served to a considerably larger number of people than if it were only available on LiveLeak or something back in the day. It’s so difficult to effectively regulate these social media companies now because they have become part of mainstream society and gained so much power as a result. We are essentially just relying on the goodwill of the people running them.
But in this specific case, what if they blurred out the content and put up a warning: “This post contains graphic content, do you wish to view it?” Or perhaps AI could be used to generate a description so people know what they’re getting into. There’s nothing wrong with that, and I don’t know why it isn’t good enough.
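A minimal sketch of the kind of click-through gate being suggested, just to show how little machinery it needs; the Post fields and the AI-generated description are hypothetical, not any real platform’s data model.

```python
# Sketch of a click-through gate for graphic media.
# The Post type and its fields are made up for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    media_url: str
    is_graphic: bool
    ai_description: str = ""  # e.g. output of an image-captioning model

def render(post: Post, viewer_opted_in: bool = False) -> str:
    """Return what the client should display for this post."""
    if post.is_graphic and not viewer_opted_in:
        warning = "This post contains graphic content, do you wish to view it?"
        if post.ai_description:
            warning += f" ({post.ai_description})"
        return f"[blurred thumbnail] {warning}"
    return f"[media] {post.media_url}"

# The media is only rendered after an explicit opt-in:
post = Post("https://example.test/clip.mp4", is_graphic=True,
            ai_description="graphic footage of a stabbing")
print(render(post))                        # blurred thumbnail + warning
print(render(post, viewer_opted_in=True))  # the actual media
```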
I might sound hypocritical as a mod of a few communities on here who has removed a few comments that didn’t meet our standards, but comments on Lemmy aren’t truly removed (unless an admin purges them): they can still be viewed in the modlog, or with a client that doesn’t respect the removed flag (there are still quite a few of those).
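For anyone curious, the modlog isn’t just a web page either; here’s a rough sketch of pulling recent comment removals over Lemmy’s HTTP API. It assumes the instance exposes GET /api/v3/modlog and that the response includes a removed_comments list (true of recent Lemmy versions as far as I know); the instance URL is a placeholder and the inner field names are best guesses, hence the defensive lookups.

```python
# Rough sketch: fetch recent comment removals from a Lemmy instance's public modlog.
import requests

INSTANCE = "https://lemmy.example"  # placeholder instance URL

resp = requests.get(f"{INSTANCE}/api/v3/modlog", params={"limit": 20}, timeout=10)
resp.raise_for_status()
modlog = resp.json()

# Field names below are best-effort guesses at the modlog response shape.
for entry in modlog.get("removed_comments", []):
    reason = entry.get("mod_remove_comment", {}).get("reason")
    content = entry.get("comment", {}).get("content")
    print(f"reason={reason!r} content={content!r}")
```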
I don’t think warnings are good enough if the content is being delivered automatically into people’s feeds. People are not really thinking rationally when they are doom-scrolling on social media. Not to mention that text descriptions are not always adequate preparation for extreme content, particularly with social media minimum age limits as low and as unenforced as they are.
I think democracy is already in trouble with giant mega corps controlling what we say instead.
The risk of the government controlling it is that someone might use that power for their own interests; for corporate interests, that’s just their natural and inherent state. I think I’d rather have a chance at meaningful regulation than hand these powers over to whoever happens to have the most money right now, like we’re currently doing.
It’s a hard problem. The great American political experiment of free speech absolutism is crumbling in the face of new technologies that amplify lies until they drown out any discourse in “the marketplace of ideas”. I don’t know what should be done about that. The only government we have in America was bought and sold in the 70s. Right now, if America tried to regulate speech in any way, it would just put a new name on the same monied interests that already control what can be said on these platforms.
Idk, we’re probably just fucked.