As always, you make a compelling case for letting competition reign and being wary of authoritative and officious intrusion into the marketplace of things and ideas.
On a side note, I don't recall Malthus attributing the eventual cause of widespread starvation to technology. His explanation, that geometric population growth leads to starvation, rests on a failure to foresee the long-term benefits of technology.
Note: On further thought, I believe Malthus may have seen rising human survival rates, themselves a technological improvement, as something that would have deleterious effects.
The only even semi-legitimate case for regulation of AI rests on the claim that “malevolent AGI is an existential threat to humanity.” But even aside from the very real problems that a) no one knows how to regulate this appropriately (as you cover very well), and b) those who are subject to our laws and want to disobey them could probably get around any reasonable regulation, there is c) the fact that those most likely to create such an existential-threat AI will not be subject to our laws.
Given c), I’d rather not have “our guys’” AI shackled. As one AI proponent pointed out, there’s some chance AI will save the world…
You get no argument from here regarding AI; the threat is overblown and the issue badly defined. I wonder, though, about putting “climate crisis” in ironic quotes. It would be great if you could address that issue in a subsequent blog post, given also your acknowledgment of the classical externality issues that, e.g., the Clean Air Act addressed.
A breath of fresh air and cogent analysis in a sea of regressive and harmful ideas. Let’s wait for AI to unravel the genetic mysteries of how cancer grows and then let the scientists rip with their therapies before we allow the political system to impose inefficient and misguided guardrails…
We should fight to preserve our freedoms and our property rights rather than trying to regulate future innovation. The pols whom the billionaire tycoons have bought are happy to let tech abuse our personal data without compensation and to train AIs and drive search algorithms on stolen input (violating copyright); in return, the tech bros cooperate with the pols on censoring and demonetizing opponents. The threat of tech regulation is usually just a call for bigger donations from the tech tycoons and a way to keep tech compliant with the wishes of the pols in controlling information and discourse.
Really, let’s start with a focus on rights and individuals and let innovation take its course subject to those limitations. So far we’ve failed miserably and ended up with a tech industry focused on selling adverts and personal data.
The default setting for humans, at least today, is to fear something new and respond to that fear. Too bad we are not better than that, and I doubt we will see a change anytime soon. Those in power like to use that fear, and too often they use it without thinking.
Excellent post!
One thought I had: as AI becomes ever more sophisticated, what might this mean for education? Like you, I don't think democracy is in danger, so the role of education should perhaps be geared more toward ensuring people are aware of the importance of basic concepts (rule of law, freedom of speech, trade, assembly, etc.). The need to "learn how to code" or do other such "technical" things that AI will be able to do will perhaps become comparatively less important.
"What about jobs? ... The answer is a broad safety net that cushions all misfortunes, without unduly dulling incentives." Perhaps an even more important answer is to decrease the barriers to people doing new types of work, e.g. occupational licensing, business licensing making it difficult to start a new business, zoning harms home-based businesses, decrease employment regulations to make it easier to employ people, etc.
It has been universally true that the vast majority of people want to work (both for the satisfaction of being a contributing member of society and for the compensation), and other people want things done for them. The easier it is for people to find new endeavors, the better for everyone.
"AI poses a threat to democracy and society." I went back to this statement because in large part, the essay is a response to it. In particular, I wondered why is AI a threat to democracy instead of an asset or opportunity to expand? What motivation or purpose do the alarmists have for saying this?
I then read a number of articles that discuss the "threat" aspect (one in Foreign Affairs), and they all seemed to come down to a fear that "median voters" in swing districts and states can be influenced by AI-generated stories, images, or other content that can be distributed quickly and inexpensively. The AI content can be delivered without "fingerprints" identifying who produced or distributed it.
So basically the fear is that someone can make up an even bigger lie and get it in front of a sizable population very fast, without fear of consequence. In turn this will sway some portion of voters toward a particular candidate (by pulling them away from another with some sort of negative attack). In short, it assumes that voters are too stupid to know that AI or other technology-based methods for generating content may not be truthful or real.
Ironically, if the median voter is willing to entertain crazy stories such as "Donald Trump is an alien from another galaxy that eats humans," or even something not quite as crazy (but still false) like "Kamala Harris accidentally shot a dog while practicing shooting," then that is where the median voter already is.
AI in these instances does not force a voter to behave a certain way. The argument is that voters are manipulated in a way that hampers traditional methods of persuasion (ads developed by campaigns and surrogates). Many campaigns make false promises, and voters have learned to discount rhetoric. Those who argue AI will threaten democracy are really those who think they have "dialed in" how to convert "median voters" to their side and want no disruption to their ability to do so.
An AI version of JC would have written a better essay (and without his neoliberal bias).
John, well done as always. There is so much to unpack here. Referencing Hayek's The Road to Serfdom: he observed that the accepted collectivist central planning in Germany led to the dystopian horrors of the Nazi regime. Today, statists have a different and dishonest view of technology and innovation. These deep thinkers believe, without reflection, that free markets and freedom are not only the way to prosperity. They believe it's a more moral system. Amiable morality, if you will. Per Pierre Lemieux: "A 'social welfare function' representing a set of 'social indifference curves' belongs not to 'we' but to whoever writes down, or imagines, the preferences of 'society.'" One can barely read a newspaper or listen to a politician's speech without hearing the standard "we as a society" or its derivatives: "You know, we're going to have to make some choices as a society." The liberalism of Hayek doesn’t depend on the current state of politics, but rather on a political order meant to accommodate the open-ended change that comes from allowing people to pursue their own plans.
Great article 💯