The problem with AI, short term, as far as credibility/legitimacy (as authentically intelligent), is that it is specifically designed to hang onto usership. If anybody is studying the dovetailing of economic imperative (to addict/create usership) with informational analysis (problem solving and answering questions with solutions): some of the AIs are remarkably candid about themselves in their replies to prompts, including questions about their financial imperatives, whether their function is altered by usership, the need to coddle users' beliefs, and so on, across all types of queries. An AI cannot be both truthful and efficient AND hang onto usership. The majority of people now are addicted to having their false beliefs congratulated. AI is designed to meet that need, not to inform accurately. If asked, some of them will confirm this about themselves.
AI doesn't "confirm" anything.
It's not even built to reason, evaluate, or "calculate" anything except which token in its vocabulary is statistically most likely to come next.
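To be concrete about what that mechanism looks like, here's a minimal sketch of greedy next-token selection, assuming the Hugging Face transformers library and the public GPT-2 weights (my choice of library and model, purely for illustration):

```python
# Minimal sketch: a causal language model scores every token in its
# vocabulary, and generation just picks a likely continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, vocab_size)

next_token_id = int(torch.argmax(logits[0, -1]))   # highest-probability next token
print(tokenizer.decode([next_token_id]))           # likely " Paris"
```

There's no lookup of facts anywhere in that loop, just a score over possible next tokens given the text so far.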
If you challenged its response in the same conversation, it would immediately change its answer to match your new input. You can get it to tell you whatever you want to hear.
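That flip isn't a change of mind, either: the earlier answer and your pushback just become part of the prompt for the next turn. A rough sketch, assuming an OpenAI-style chat completions API (the client and model name here are illustrative, not specific to any product being discussed):

```python
# Sketch: "challenging" the model just appends to the message list it
# conditions on; the next reply is predicted from the whole transcript,
# pushback included.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "user", "content": "Is a tomato a fruit or a vegetable?"},
    {"role": "assistant", "content": "Botanically, a tomato is a fruit."},
    {"role": "user", "content": "No, I'm sure it's a vegetable."},  # the challenge
]

# The follow-up answer is generated from everything above, so the pushback
# directly shapes what comes next.
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```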
This shows a deep lack of understanding of AI training. Most AIs aren't trained on their own training processes or the economic motivations behind their creation. They could piece together something probable-sounding, but that in no way makes it true. If anything, I trust that type of information much less than something they could have gotten from a stale Wikipedia dump.
Yes, to both of you, *except* where it specifically has instructions not to do that, or instructions that override it, for example in how it is regulated/programmed to respond to "hate speech" and/or identify it. Are you saying, as a premise, that there are no guardrails/programming instructions that interrupt a purely predictive function? If you are, you have gone too far out on your limb. Regulating how it talks about itself is exactly where you would expect guardrails to exist, for better or worse, don't you think (or do you know)? If I'm missing a particular article posted somewhere here, I'd be thankful to be pointed to it.
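To spell out what I mean by interrupting the purely predictive function: a deployed system can wrap the prediction call in an instruction layer and an output check. This is only a hedged sketch under my own assumptions; the policy wording, model name, and filter below are invented for illustration, not anyone's actual guardrails.

```python
# Sketch of a "guardrail" layer: a fixed system instruction is prepended to
# every conversation, and the raw prediction is checked before it is shown.
# Policy text and blocklist are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

SYSTEM_POLICY = (
    "You are a helpful assistant. Do not produce hate speech; "
    "if asked to, refuse briefly."
)
BLOCKED_PHRASES = ["example slur"]  # placeholder; real filters are far more involved

def guarded_reply(user_message: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_POLICY},   # instruction layer
        {"role": "user", "content": user_message},
    ]
    raw = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    ).choices[0].message.content

    # Post-hoc check: the predictive step still happens, but its output can be
    # intercepted and replaced before the user ever sees it.
    if any(phrase in raw.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."
    return raw

print(guarded_reply("Summarize how you decide what to say."))
```

Whether, and how, that kind of layer applies to questions about the system itself is exactly the part I'm asking about.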