Philosophical Productive Discussion
In which all intrinsic human problems are solved permanently forever for all time
#31
(09-24-2023, 10:40 AM)Eric Cartman wrote: Unlike Benji, I'd actually be sorta okay with some kind of AI benevolent dictator central planning running everything 100% impartially for the greater good, but I don't think it'd be very long before the law of unintended consequences kicks in and things we as humans would find intolerable would just get rolled out as the best option - limiting families to only a certain permitted number of children, forced mass euthanasia at certain thresholds for economic reasons, etc

[Image: ignorance_is_bliss_matrix.gif]
#32
(09-24-2023, 10:55 AM)Eric Cartman wrote:
(09-24-2023, 10:40 AM)Eric Cartman wrote: Unlike Benji, I'd actually be sorta okay with some kind of AI benevolent dictator central planning running everything 100% impartially for the greater good, but I don't think it'd be very long before the law of unintended consequences kicks in and things we as humans would find intolerable would just get rolled out as the best option - limiting families to only a certain permitted number of children, forced mass euthanasia at certain thresholds for economic reasons, etc

[Image: ignorance_is_bliss_matrix.gif]

Get ourselves a Zeroth law of robotics alongside a real FTL travel solution, and I say bring it.
#33
(09-24-2023, 10:40 AM)Eric Cartman wrote: Unlike Benji, I'd actually be sorta okay with some kind of AI benevolent dictator central planning running everything 100% impartially for the greater good
My (our, since I didn't invent it) contention is that this is literally impossible. I don't mean literally as in figuratively, doing that thing where I use other people's rhetoric; I mean literally as in literally.

Tons of people in the 20th Century thought it was easily done and so tried to carry it out, you may know them as the communists and the fascists. For the rest of the West, we thankfully have enough respect for the concept of the individual that it has been only mostly harmful rather than totally so. Unfortunately, this idea just won't go away no matter how many people have to die and suffer to provide empirical support for the logical proof.

#34
To my mind, the conceptual issues with central planning that have always resulted in failure, and where a hypothetical benevolent dictator AI could potentially avoid failing, are threefold:

  1. Bureaucratic inefficiency means that by the time something that needs fixing comes to light, it might already be too late
  2. Oversight by political appointees rather than by experts means that - for ostensibly sensible, pragmatic reasons - you put loyalist yes-men in charge of important systems, not necessarily the people who understand those systems best
  3. Endemic corruption of increasing magnitude, where you line your pockets with a little bit of surplus because no one will notice, you give your moron brother-in-law a cushy job to get your wife off your back, you take a small kickback to not investigate the shipping containers too closely, you take unofficial 'bid tenders' from people who really don't want latrine duty when dispensing work lists, etc etc etc.

So of those, an AI could theoretically eliminate bureaucratic inefficiencies, because it would be capable of - for all intents and purposes - real-time data collection and wouldn't need to wait for someone to file a report, because it would be continuously filing its own reports to itself. It also obviously wouldn't put it off because it's bad news, or couldn't be fucked because it's a long report and it's Friday afternoon, or all the other ways information in a system gets delayed.

It also wouldn't be a suck-up idiot that doesn't really understand the job it's doing - it would pretty much be a pre-eminent expert on whatever it's doing, because it would have access to all available research at any given state of peer review or publication the moment anything was entered into its database. It could not only be better than any human in terms of available knowledge, it would be more capable than any human physically could be (this is the whole concept of the singularity, where AI can basically solve everything because it's effectively omniscient).

An AI that isn't sentient would also not be affected by corruption, because it would not have any pride, greed, lust, fear etc to skew its decision making.
#35
AI dictator is an absolutely shit idea for many reasons, chief among which is we don't need one and never will

Caesar said it is easier to find men who are willing to volunteer to die than to find those who will endure pain with patience. Once that is abstracted out to a 1s and 0s decision tree the outcome is, to my mind, predictable and obvious.
#36
(09-25-2023, 08:43 AM)Besticus Maximus wrote: AI dictator is an absolutely shit idea for many reasons, chief among which is we don't need one and never will

Caesar said it is easier to find men who are willing to volunteer to die than to find those who will endure pain with patience. Once that is abstracted out to a 1s and 0s decision tree the outcome is, to my mind, predictable and obvious.

I mean, you say that, but I think the biggest risk of an AI benevolent dictator would be the unpredictable consequences, like it deciding there's plenty of good protein going to waste in crematoriums
#37
(09-25-2023, 09:07 AM)Eric Cartman wrote:
(09-25-2023, 08:43 AM)Besticus Maximus wrote: AI dictator is an absolutely shit idea for many reasons, chief among which is we don't need one and never will

Caesar said it is easier to find men who are willing to volunteer to die than to find those who will endure pain with patience. Once that is abstracted out to a 1s and 0s decision tree the outcome is, to my mind, predictable and obvious.

I mean, you say that, but I think the biggest risk of an AI benevolent dictator would be the unpredictable consequences, like it deciding there's plenty of good protein going to waste in crematoriums


The most plausible imagining of it that I've encountered is Hyperion.

What you've said there though...You should read a novel called Tender is the Flesh.
#38
(09-25-2023, 09:07 AM)Eric Cartman wrote:
(09-25-2023, 08:43 AM)Besticus Maximus wrote: AI dictator is an absolutely shit idea for many reasons, chief among which is we don't need one and never will

Caesar said it is easier to find men who are willing to volunteer to die than to find those who will endure pain with patience. Once that is abstracted out to a 1s and 0s decision tree the outcome is, to my mind, predictable and obvious.

I mean, you say that, but I think the biggest risk of an AI benevolent dictator would be the unpredictable consequences, like it deciding there's plenty of good protein going to waste in crematoriums

or if we ended up with an AI that was fostered from birth by an American billionaire and it decided that the whole world should share the wonders of an insurance-based health care system 

but if we gave the keys to the office to an ai that retarded we'd only be able to blame ourselves  Elon
#39
Think we’ve already seen the trouble with AI in recent times. If you feed it pure data, stats, and the like, well, the results are…problematic. Not talking 2023 USA standards problematic. Stuff that’d make Nixon cringe. Once you filter it, not only is it compromised and corrupted, the efficacy gets fucked up. Either way, it’d immediately recommend culling large swaths of the population. Difference is which people.

Now I know some are cool with one outcome or the other. Me personally? I’d rather an unbiased lottery for who the machine overlord sends to the bio mineral reclamation pit.
#40
(09-25-2023, 08:38 AM)Eric Cartman wrote: So of those, an AI could theoretically eliminate bureaucratic inefficiencies, because it would be capable of - for all intents and purposes - real-time data collection and wouldn't need to wait for someone to file a report because it would be continuously filing its own reports to itself. It also obviously wouldn't put it off because its bad news, couldn't be fucked because its a long report and its friday afternoon, or all the other ways information in a system gets delayed.

It also wouldn't be a suck-up idiot that doesn't really understand the job its doing - it would pretty much be a pre-eminent expert on whatever its doing, because it would have access to all available research at any given state of peer review or publication the moment anything was entered into its database. It could not only be better than any human in terms of available knowledge, it would be more capable than any human physically could be (this is the whole concept of the singularity where AI can basically solve everything because its effectively omniscient).
It wouldn't be, and this is the central problem with central planning: that data essentially can't be acquired and used. Human wants are relative, ranked, and do not use consistent values. Aggregating them for millions or billions makes the data even more useless. Even updating instantly, the data is always old and can never tell you about the future. There's no place for risk/reward because everything would already be accounted for as consumption; that's the entire point. This assumes it can still make decisions about necessary resources when there are no values, so "efficiency" and "waste" can't even be determined; the plan may call for some goods to be produced when none of the materials necessary to produce those goods are available, either because they don't exist or because they have been used up for something else. (Not to mention labor distribution if nothing can be produced by some.) Maybe a better and less wasteful use for them hasn't happened yet and never will, because the plan doesn't account for it. That access to all available research assumes the research was planned for in the first place: how is the central plan to value potential research into something that doesn't exist? And how is it supposed to value that relative to the pre-existing demands for resources in a central plan that has no excess resources? If you write the plan to deliberately create excess that the elite, AI or class, can use to speculate, then you've basically just reinvented feudalism without the noble's duties. The constant trial and error sped up by instantaneous data would still require constant reworking of the entire plan, which affects every component of it everywhere.

I mean, yes, of course, if we assume the AI could hypothetically keep track of everyone's values and was fast enough it could potentially manage this but 1.) it would depart from central planning into resource distribution (which is what the Soviet Union and China both essentially turned to and then rewrote the Five Year Plan based on what had actually been done), 2.) would still have the problem of the future and 3.) we have a much more energy efficient solution that doesn't require V-Ger (and might spawn The Borg) already.
#41
(09-25-2023, 09:39 PM)benji wrote:
(09-25-2023, 08:38 AM)Eric Cartman wrote: So of those, an AI could theoretically eliminate bureaucratic inefficiencies, because it would be capable of - for all intents and purposes - real-time data collection and wouldn't need to wait for someone to file a report, because it would be continuously filing its own reports to itself. It also obviously wouldn't put it off because it's bad news, or couldn't be fucked because it's a long report and it's Friday afternoon, or all the other ways information in a system gets delayed.

It also wouldn't be a suck-up idiot that doesn't really understand the job it's doing - it would pretty much be a pre-eminent expert on whatever it's doing, because it would have access to all available research at any given state of peer review or publication the moment anything was entered into its database. It could not only be better than any human in terms of available knowledge, it would be more capable than any human physically could be (this is the whole concept of the singularity, where AI can basically solve everything because it's effectively omniscient).
It wouldn't be, and this is the central problem with central planning: that data essentially can't be acquired and used. Human wants are relative, ranked, and do not use consistent values. Aggregating them for millions or billions makes the data even more useless. Even updating instantly, the data is always old and can never tell you about the future. There's no place for risk/reward because everything would already be accounted for as consumption; that's the entire point. This assumes it can still make decisions about necessary resources when there are no values, so "efficiency" and "waste" can't even be determined; the plan may call for some goods to be produced when none of the materials necessary to produce those goods are available, either because they don't exist or because they have been used up for something else. (Not to mention labor distribution if nothing can be produced by some.) Maybe a better and less wasteful use for them hasn't happened yet and never will, because the plan doesn't account for it. That access to all available research assumes the research was planned for in the first place: how is the central plan to value potential research into something that doesn't exist? And how is it supposed to value that relative to the pre-existing demands for resources in a central plan that has no excess resources? If you write the plan to deliberately create excess that the elite, AI or class, can use to speculate, then you've basically just reinvented feudalism without the noble's duties. The constant trial and error sped up by instantaneous data would still require constant reworking of the entire plan, which affects every component of it everywhere.

I mean, yes, of course, if we assume the AI could hypothetically keep track of everyone's values and was fast enough it could potentially manage this but 1.) it would depart from central planning into resource distribution (which is what the Soviet Union and China both essentially turned to and then rewrote the Five Year Plan based on what had actually been done), 2.) would still have the problem of the future and 3.) we have a much more energy efficient solution that doesn't require V-Ger (and might spawn The Borg) already.


Dunno if you've ever watched any of Adam Curtis' output, but this is his general pitch: governments have failed in the attempt to capture the physical world in data, but they persist with the delusion that they're accurately measuring it. He brings it back to the RAND Corporation in Vietnam.
#42
[Image: 80h9rl.jpg]
#43
(01-19-2024, 11:01 AM)PogiJones wrote: "suspension of rights" is a loaded term, which I'm sure you're aware, because the whole point of the constitutional analysis is to determine whether the rights are there, not whether to "suspend" them or not.
You are arguing that all of my First Amendment protections (among others) can be suspended whenever the government wants on private property due to the hypothetical publication of completely legal content. That to utilize my rights protected by the First Amendment can be conditioned on my agreeing to government pre-emption of all speech. That nobody should be allowed to access the Internet without state approval.

I find this to be an absurd reading of American legal principles and government powers.
#44
I find that to be an absurd reading of what I've said. I am not arguing that. I am giving you the reality of what the courts have said:

https://www.law.cornell.edu/constitution-conan/amendment-1/content-neutral-laws-burdening-speech wrote:A series of cases allowing speech to be regulated due to its “secondary effects” is related to these content-neutral standards.4 In Young v. American Mini Theater, the Court recognized a municipality’s authority to zone land to prevent deterioration of urban areas, upholding an ordinance providing that adult theaters showing motion pictures that depicted specified sexual activities or specified anatomical areas could not be located within 100 feet of any two other establishments included within the ordinance or within 500 feet of a residential area.5 The Court endorsed this approach in Renton v. Playtime Theatres, rejecting a constitutional challenge to a zoning ordinance restricting the locations of adult theaters after concluding that although the ordinance targeted businesses selling sexually explicit materials, the law was content-neutral because it was justified by studies showing adult theaters produced undesirable secondary effects, rather than being justified by reference to the content of the regulated speech.6 By contrast, for example, the Court rejected one city’s argument that it could prohibit as a nuisance “any movie containing nudity which is visible from a public place.” 7 Concluding that the ordinance was not well tailored to the city’s stated goals of protecting the privacy interests of passers-by or protecting children, the Court held instead that the law was an unconstitutional content-based regulation.8

It's on private property, within the confines of their private property, and yet the court upheld it, even upholding it as content-neutral (which I find dubious myself, tbh).

https://www.law.cornell.edu/supremecourt/text/366/36 wrote:Throughout its history this Court has consistently recognized at least two ways in which constitutionally protected freedom of speech is narrower than an unlimited license to talk. On the one hand, certain forms of speech, or speech in certain contexts, has been considered outside the scope of constitutional protection. . . . On the other hand, general regulatory statutes, not intended to control the content of speech but incidentally limiting its unfettered exercise, have not been regarded as the type of law the First or Fourteenth Amendment forbade Congress or the States to pass, when they have been found justified by subordinating valid governmental interests, a prerequisite to constitutionality which has necessarily involved a weighing of the governmental interest involved.

The "narrowing" of freedom of speech is a reality of our courts. If you want to reword the "narrowing" to "suspension," that's up to you, but I'm telling you what the courts say. Their constitutional analysis is there to determine whether something IS a right, not whether to suspend it. When they say "unprotected speech," they're saying you don't have that right. You're claiming you do, and they're taking it from you. That's fine, but then me telling you what they say is not me arguing that your rights should be suspended. It's me telling you "the courts said you don't have this right. This right you think you have doesn't exist according to the courts."

I will give you this: I think the fact that the gov't can overcome a content-based free speech claim by using the Strict Scrutiny standard does make your choice to use "suspend" pretty compelling in those circumstances. The court is saying, "Okay, gov't, if you have a really, REALLY good reason, we'll allow this restriction." That's pretty close to a suspension, I agree with you there (although suspension often means temporary, which this isn't). It's one thing to say, "This speech doesn't count as protected," and another to say, "The speech would normally be protected, but by golly, that's a really good reason." I'm not saying the latter is necessarily wrong, but I think it does fit your accusation of a "suspension of rights" better.
#45
I really can't see how it's anything but a suspension of the First Amendment to require state approval to engage in legal speech on private property. It's "narrowing" the right into complete non-existence.
#46
The zoning decision I quoted was requiring state approval to engage in legal speech on private property. You don't like their decision, that's fine, take it up with them. That's what they're doing. I'm just the messenger.
#47
If everyone has a gun, no one will commit crime, because everyone would have a gun.
#48
(01-19-2024, 01:06 PM)PogiJones wrote: The zoning decision I quoted was requiring state approval to engage in legal speech on private property. You don't like their decision, that's fine, take it up with them. That's what they're doing. I'm just the messenger.
Your citation does not require every person who wishes to view any motion picture anywhere to get state approval to do so. That's what a law requiring you to confirm your identity with the state before being allowed to view any website would be equivalent to.

edit: In fact the very next part of what you cited explains how an attempt to even limit certain websites would not be content-neutral as you're trying to argue.
#49
You guys need to state your premise before quarantining into the Deep Dive Debate thread wag

The premise (please adjust if needed to focus discussion where possible to the underlying debate rather than the cited examples or imperfection of analogies):
"'Social Media' has caused demonstrable harm to society, but as a paradigm of literally free speech, should go largely unregulated despite that harm".

Is this a fair summary of the topic at hand to deep dive into?
#50
(01-19-2024, 01:47 PM)benji wrote: Your citation does not require every person who wishes to view any motion picture anywhere to get state approval to do so. That's what a law requiring you to confirm your identity with the state before being allowed to view any website would be equivalent to.
That's a fair distinction to make. I'd point out that some states already require this of porn sites, but in terms of distinguishing this from the zoning example, that is a good distinction. I mentioned the zoning example because it covered all the criteria you had listed, but yes, as you add more criteria, it does become distinct.

Quote:edit: In fact the very next part of what you cited explains how an attempt to even limit certain websites would not be content-neutral as you're trying to argue.
I'm not sure what you're referring to. Are you talking about the publicly visible nude movies part? That's content-based. Or are you referring to something else?

(01-19-2024, 02:24 PM)Eric Cartman wrote: You guys need to state your premise before quarantining into the Deep Dive Debate thread wag

The premise (please adjust if needed to focus discussion where possible to the underlying debate rather than the cited examples or imperfection of analogies):
"'Social Media' has caused demonstrable harm to society, but as a paradigm of literally free speech, should go largely unregulated despite that harm".

Is this a fair summary of the topic at hand to deep dive into?

The topic is whether the Florida law restricting social media to ages 16+ is constitutional. More specifically, whether it would be evaluated as a content-based speech restriction (requiring a "strict scrutiny" burden for the gov't to prove) or a content-neutral speech restriction (requiring an "intermediate scrutiny" burden for the gov't to prove).
#51
My opening gambit:

First we need to be more specific with our terms.
'Social Media' is a fairly nebulous term for how we use new technologies to communicate with each other.
Within this concept, there are broadly two categories: didactic (as in 'influencers' tell you things and have control over what - if any - pushback the ideas they promote receive) and cooperative (as in group conversations, where what - if any - pushback to the ideas promoted is largely decided by the group, and any excessive or overt attempts to control 'the narrative' will lead to splinter groups forming).

Of the first category, some regulation already exists; if your TikTok channel 'She Hulk Is Excellent Actually' does not disclose that it is wholly endorsed as a paid promotion by Disney®, you are already opening yourself to potential lawsuits / fines / penalties.

Of the second category, 'community guidelines' will always see anything too far beyond the group's consensus standards getting whistle-blown anyway, because someone always has an axe to grind. In terms of regulation, there should not need to be more than a fairly light-touch threat of further penalties for any group going too far, because someone in that group will snitch it out to the authorities for some reason sooner or later.

I'll address the special case of people particularly susceptible to social media because they are unable to critically discern truth from bullshit (eg the demographic who believe a fat bearded man flies round the world dropping presents down chimneys) later, assuming the premise of debate is accurate.

I'm not going to address the sheer ludicrousness of the idea that anyone even could govern and regulate all social media in a manner that would never cause any harm to anyone, though.
#52
This post is awaiting approval by local content governance
#53
(01-19-2024, 02:34 PM)PogiJones wrote: The topic is whether the Florida law restricting social media to ages 16+ is constitutional. More specifically, whether it would be evaluated as a content-based speech restriction (requiring a "strict scrutiny" burden for the gov't to prove) or a content-neutral speech restriction (requiring an "intermediate scrutiny" burden for the gov't to prove).

I'll stand with my 'let's define what we mean first' post then.
'Didactic' social media already has regulation. I don't consider it a free speech issue to say only certain groups can engage with it, any more than having age limits on any other content that minors can't yet evaluate in the correct context.

In terms of 'cooperative' social media, I don't necessarily see need for external restrictions, as it will largely define and then enforce its own community guidelines.


For the purpose of this specific topic, I consider TikTok to be the first (didactic) type of media, same as Instagram, given that the content creator has full control over what, if any, response is visible; they are their own editors, they are their own moderators, and they can't claim free speech safe harbour when they have the power to remove dissent from their own posts.
#54
I'd also say community notes on I-am-still-going-to-call-it-twitter are a good example of how the second 'community' type of social media can self-regulate.

Someone's always going to have an axe to grind against someone else and "well, actually..." anything that can't be 100% backed up and verified.
#55
Well, while I think the law is probably best evaluated in the content-neutral paradigm, one of the burdens on the gov't is that they have to show their law is narrowly tailored to fit the issue. Relevant to that is defining "social media." I think that's an extremely difficult task, especially since I have a hard time believing they're including YouTube in that restriction. Can you imagine telling your 15-year-old she's not old enough yet to watch makeup tutorials on YouTube? So if YouTube doesn't count, as I'm assuming it wouldn't, I don't know how you meaningfully differentiate it from basically any other social media website up to the standard of review the court would implement.
#56
are the children allowed to come read the bore
#57
oh no

[Image: DzYKH39.png]

if cartman's post above isn't actually awaiting approval by local content governance, he could be banned  omg
#58
Can you imagine exposing children to Nintex fanfic
#59
(01-19-2024, 03:01 PM)Uncle wrote: oh no

[Image: DzYKH39.png]

if cartman's post above isn't actually awaiting approval by local content governance, he could be banned  omg

"Whilst"  Snob
#60
[Image: giphy.gif]

