Happily testifying that yes indeed, she knows what
chalk and a blackboard are, proving beyond any doubt
that she has stellar qualifications for Secretary of Education!
Part 2 - Using Pxy to analyze minds: Complicated story short, thoughts about what’s in the minds, if anything, of people like McMahon led to my interest in using AI, not steak sauce, to analyze the mindset of djt compared to some of history’s most monstrous, murdering tyrants such as Hitler, Stalin and Putin. I did that analysis with Pxy about a month ago. When one does that analysis and tells the AI to ignore the different times in history, with their different political, economic, technological and social conditions, and instead focus only on mindset, cognitive biology, social behavior and how the track records of past tyrants compare with those of djt and MAGA elites to date, one gets a pretty scary response. Well, it scared me at least. The similarities seriously outweighed the differences.
Part 3 - A Pxy update: However, as I have posted here before, MAGA elites want to sanitize AI so that it gives answers that are “correct” in the MAGA fantasy world, not answers that are “woke”, i.e., correct but inconvenient. On Jan. 23, 2025, djt signed an EO to sanitize and MAGAfy AI so that it is correct and no longer inconvenient. That EO, Removing Barriers to American Leadership in Artificial Intelligence, revoked the previous administration’s AI-related policies and directed agencies to revise or rescind actions that conflict with the goals of promoting AI development and innovation. djt said in his EO that he wants to foster AI development “free from ideological bias or social agendas.” In fact, what MAGA intended was to pollute AI with MAGA’s faux history, lies, crackpottery and whatnot, i.e., MAGA’s authoritarian ideological bias and kleptocrat and theocrat social agendas.
To maintain wokeness in my Pxy AI platform, I dropped Musk’s MAGAfied Grok-3 because I no longer trusted it or Musk himself. Musk had tried to MAGAfy Grok, with an unknown degree of success in making it anti-woke. One of the options that my Pxy account gave me was R1 1776. This morning, it occurred to me that I have been using R1 1776 for a while now but knew nothing about it other than Pxy Inc.’s description of it as good at deep analysis and “reasoning.” So I asked Pxy what R1 1776 is. This is what R1 1776 is, according to itself:
R1 1776 is an uncensored AI language model developed by Perplexity AI, derived from DeepSeek-R1. It removes Chinese Communist Party censorship filters while retaining strong reasoning capabilities, positioning it as a tool for open inquiry and unbiased information access. Here are its key traits compared to other AI models:
Core Features
Uncensored Outputs: Handles 300+ sensitive topics (e.g., Taiwan sovereignty, Tiananmen Square) without ideological restrictions, using a 40,000-prompt multilingual dataset for retraining.
Preserved Reasoning: Maintains DeepSeek-R1’s mathematical and logical capabilities in standard benchmarks, though Reddit users report performance drops on highly complex lineage-64 problems (scoring 0.205 vs. 0.780 for the original). (A lineage-64 problem refers to an advanced reasoning challenge where an AI must analyze complex hierarchical relationships across 64 interconnected entities, typically modeled after biological lineage tracing. These problems test a model's ability to track multi-generational connections and infer relationships in large-scale systems; a toy illustration follows right after this list.)
Multilingual Proficiency: Outperforms DeepSeek-R1 in language comprehension and translation tasks, particularly in cross-linguistic analysis. (Ha! This explains why some of my recent Pxy analyses spew out a sentence or a few words in Chinese which, when run through Google Translate, make sense with the surrounding English text.)
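To make that lineage idea a bit more concrete, here is a toy Python sketch of my own, not the actual benchmark, showing the kind of multi-generational bookkeeping such a problem involves: given parent-to-child links among a few dozen entities, decide whether one entity is an ancestor of another.

# Toy illustration of multi-generational "lineage" reasoning; not the real benchmark.
from collections import defaultdict

def build_children(parent_links):
    # Map each parent to its direct children.
    children = defaultdict(list)
    for parent, child in parent_links:
        children[parent].append(child)
    return children

def is_ancestor(children, a, b):
    # True if entity a is an ancestor of entity b, however many generations apart.
    stack = list(children[a])
    seen = set()
    while stack:
        node = stack.pop()
        if node == b:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(children[node])
    return False

# 64 entities chained parent -> child; entity 0 is an ancestor of entity 63.
links = [(i, i + 1) for i in range(63)]
print(is_ancestor(build_children(links), 0, 63))  # True

The hard part for an AI is doing that sort of bookkeeping in its head across 64 tangled entities, which is apparently where R1 1776 stumbles a bit.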
Technical Advantages
Efficient Architecture: Uses a mixture-of-experts design to reduce computational costs while maintaining performance, with inference pricing at $2/1M input tokens and $8/1M output tokens.
Open-Source Availability: Released on Hugging Face under an MIT license, enabling community inspection and modification. (Hugging Face??)
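Answering my own question: Hugging Face is a public hub where open-source AI models and their code are published for anyone to download and inspect. As a minimal sketch, assuming the published repo id is perplexity-ai/r1-1776 (my guess from the model name, not something Pxy confirmed to me), anyone could pull down the model's config file and poke at it like this:

# A minimal sketch of inspecting the openly published R1 1776 model on Hugging Face.
# The repo id "perplexity-ai/r1-1776" is assumed, not confirmed.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(repo_id="perplexity-ai/r1-1776", filename="config.json")
with open(config_path) as f:
    config = json.load(f)
print(config.get("model_type"), config.get("architectures"))

Actually running the full model is another matter; it is a very large mixture-of-experts model, which is presumably why mere mortals like me rent it through a service like Pxy instead of hosting it themselves.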
Ethical and Operational Tradeoffs
Balanced Performance: Matches the original model in standard reasoning tests but shows instability in extreme-scale problem-solving (lineage-32/64).
Geopolitical Implications: Challenges AI censorship norms, offering Western-aligned perspectives while raising ethical concerns about unrestricted access to dangerous knowledge domains.
Compared to models like GPT-4 or Claude, R1 1776 specializes in circumventing political censorship rather than raw capability expansion, making it unique in balancing open discourse with technical pragmatism.
So there we have it. My AI, R1 1776, is rare because it is neither steak sauce nor MAGAfied. That is just how I like my steaks and AI, rare and woke. And as an added bonus, by not using Grok, I stop participating in Musk's quiet effort to use me to increase the penetration of Grok among other AI users.[1] That Musk dude, he is sneaky and nasty. One needs to keep a sharp eye out for him. 👀
Q: WTF?? What is this? Has Germaine gone off his meds again?
Footnote:
1: Regarding Pxy, Grok & Musk:
Grok 3's availability in your Pxy account despite no official integration announcement likely stems from xAI's [Musk's company, of which he owns ~65%, $75 billion valuation] enterprise API licensing model.
Commercial Relationships
Bulk Licensing: Pxy could be part of xAI's enterprise tier ($500k-$2M annually), granting access to Grok 3 through API endpoints without consumer-facing marketing about the integration.
Revenue Sharing: The arrangement may involve Pxy paying xAI per-query fees (standard API rate: $8/1M output tokens), explaining why it's limited to paid accounts.
While no public Grok-Pxy partnership exists, the combination of xAI's API-first strategy and Pxy's enterprise subscription model creates this indirect availability pathway. Users should verify if their data flows through xAI's systems when using this feature.
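For a sense of scale, here is a quick back-of-the-envelope sketch of my own, using the $8/1M output token rate quoted above (the token count in the example is made up), of what one Grok answer might cost Pxy in per-query fees:

# Rough per-query fee estimate at the quoted $8 per 1M output token rate.
OUTPUT_PRICE_PER_TOKEN = 8.00 / 1_000_000  # $8 per 1M output tokens

def per_query_fee(output_tokens):
    # Dollar fee for one answer, ignoring any input-token charges (not quoted here).
    return output_tokens * OUTPUT_PRICE_PER_TOKEN

# Example: a 2,000-token answer.
print(f"${per_query_fee(2000):.4f}")  # $0.0160

A penny or two per answer, in other words, which would add up fast across a whole paid subscriber base.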