{"id":886,"date":"2024-12-26T07:57:39","date_gmt":"2024-12-26T07:57:39","guid":{"rendered":"http:\/\/tejaarhnews.com\/?p=886"},"modified":"2024-12-26T07:59:01","modified_gmt":"2024-12-26T07:59:01","slug":"tai-131-openais-o3-passes-human-experts-llms-accelerating-with-inference-compute-scaling","status":"publish","type":"post","link":"https:\/\/tejaarhnews.com\/?p=886","title":{"rendered":"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling"},"content":{"rendered":"<h3 id=\"ember56\" class=\"ember-view reader-text-block__heading-3\">Also, Gemini 2.0 Flash Thinking, xAI\u2019s $6bn, Byte Latent Transformer, and more!<\/h3>\n<p>(Adapted with edits and abridged from Louie Peters, Towards AI)<\/p>\n<div class=\"reader-image-block reader-image-block--full-width\">\n<figure class=\"reader-image-block__figure\">\n<div class=\"ivm-image-view-model\">\n<div class=\"ivm-view-attr__img-wrapper\">\n<h2 id=\"ember58\" class=\"ember-view reader-text-block__heading-2\">What happened this week in AI by Louie<\/h2>\n<p id=\"ember59\" class=\"ember-view reader-text-block__paragraph\">OpenAI wrapped up its &#8220;12 Days of OpenAI&#8221; campaign and saved the best for last with the reveal of its o3 and o3-mini reasoning models. These models are successors to the o1 series and are arguably the largest step-change improvement yet in LLM capabilities on complex tasks &#8211; eclipsing human experts in many domains for the first time. 
The o3 release drowned out the otherwise significant launch of Google <a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/thinking-mode\" target=\"_self\" data-test-app-aware-link=\"\">Gemini\u2019s 2.0 Flash Thinking Mode<\/a> model &#8211; its first reasoning model (in the style of o1\/o3) &#8211; which, unlike OpenAI, doesn\u2019t hide its thinking tokens.<\/p>\n<p id=\"ember60\" class=\"ember-view reader-text-block__paragraph\">There is a huge amount to unpack in the o3 release &#8211; the model sailed past human expert scores on many key advanced benchmarks &#8211; including coding, mathematics, and PhD science. Perhaps most noteworthy was the breakthrough on the ARC-AGI benchmark (where LLMs have traditionally failed, achieving only average scores even with heavy scaffolding and brute force) &#8211; for example, o3 (low efficiency) achieved 87.5% vs. o1\u2019s 32% just a week earlier and GPT-4o\u2019s 5% in May. This score is considered human-level, further fueling debates over whether o3 edges closer to Artificial General Intelligence (AGI). Some of the best scores do come at a huge cost, however &#8211; o3 in low-efficiency mode (1,024 samples) costs around $3,400 per task &#8211; roughly 160x the ~$20 for o3 in high-efficiency mode (6 samples, achieving 75.7%) and far above the ~$3 for o1.<\/p>\n<p id=\"ember61\" class=\"ember-view reader-text-block__paragraph\">On the GPQA Diamond test\u2014designed for PhD-level science questions\u2014o3 scored 87.7%, compared to the 78% achieved by o1. For context, PhD holders with internet access typically score between 34% (outside their specialty) and 81% (within their domain). In coding, o3\u2019s Elo rating of 2727 on Codeforces puts it in the 99.95th percentile of competitive programmers, far exceeding the reach of most human professionals. 
Mathematics is another area where o3 shines, achieving 96.7% accuracy on the American Invitational Mathematics Exam (AIME), up from o1\u2019s 83.3% and just 13.4% for GPT-4o only months earlier.<\/p>\n<p id=\"ember62\" class=\"ember-view reader-text-block__paragraph\">This release didn\u2019t only come with a huge ~1,000x cost escalation for some tasks &#8211; it also brought the promise of huge cost savings! Due to success with model distillation and other techniques, the o3-mini outperforms the much larger o1 model released just last week on many coding and maths tasks. For example, o3-mini with medium compute achieved a much stronger Codeforces Elo of 1997 vs. o1\u2019s 1891, at what <a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/x.com\/ren_hongyu\/status\/1870208580135100750\" target=\"_self\" data-test-app-aware-link=\"\">we eyeball as a ~70-80% lower total cost<\/a>.<\/p>\n<p id=\"ember63\" class=\"ember-view reader-text-block__paragraph\">How do the models work? OpenAI still hasn\u2019t disclosed much beyond the fact that it uses reinforcement learning to improve the model\u2019s reasoning during training. However, employees have posted that the models are still just LLMs and use autoregression. We think the model is trained to be highly efficient at chain-of-thought reasoning &#8211; exploring the most likely paths and realizing when it has made a mistake. We think the rapid progress in just 3 months between o1 and o3 likely comes primarily from using synthetic data from o1\u2019s full chain-of-thought thinking tokens to expand the reinforcement learning dataset used for training. On the other hand, we expect the initial o1 mostly used a smaller set of reasoning examples commissioned from human experts (which are missing from pre-training because people almost never type out their full internal monologue and reasoning process and instead skip straight to the answers!). 
It is also possible that o3 was built on a different, more advanced base foundation model (o1 likely used GPT-4o) &#8211; perhaps GPT-4.5 or a checkpoint of the rumored Orion or GPT-5 model &#8211; leading to additional benefits.<\/p>\n<p id=\"ember64\" class=\"ember-view reader-text-block__paragraph\">One interesting note on the new regime of \u201cinference-time\u201d compute scaling is that OpenAI appears to be scaling thinking tokens both in series (up to ~100k reasoning tokens in its context window) and in parallel &#8211; with 6 samples (high efficiency) or 1,024 samples (low efficiency) used in the ARC-AGI evaluation. It is unclear how the best answer is chosen from these &#8211; it could be simple majority voting, but more likely there is complexity and extra secret sauce here in how the best samples are automatically and rapidly searched, evaluated, and chosen. We think it is possible some form of this parallel scaling is also taking place in the o1-Pro model available within the $200\/month ChatGPT Pro subscription.<\/p>\n<p id=\"ember65\" class=\"ember-view reader-text-block__paragraph\"><strong>OpenAI models\u2019 rapid breakthroughs on complex benchmarks this year:<\/strong><\/p>\n<div class=\"reader-image-block reader-image-block--full-width\">\n<figure class=\"reader-image-block__figure\">\n<div class=\"ivm-image-view-model\">\n<div class=\"ivm-view-attr__img-wrapper\"><img decoding=\"async\" id=\"ember66\" class=\"ivm-view-attr__img--centered reader-image-block__img evi-image lazy-image ember-view\" src=\"https:\/\/media.licdn.com\/dms\/image\/v2\/D4D12AQE3cwlGOcCMbw\/article-inline_image-shrink_1500_2232\/article-inline_image-shrink_1500_2232\/0\/1735045951808?e=1740614400&amp;v=beta&amp;t=PrVQl2IitIQNQvUj-QWQgLKMH1YRzJ5nGze1hAZoDfE\" alt=\"\"><\/div>\n<\/div><figcaption class=\"reader-image-block__figure-image-caption display-block full-width text-body-small-open t-sans text-align-center t-black--light\">Source: Towards AI, OpenAI 
disclosures.<\/figcaption><\/figure>\n<\/div>\n<p id=\"ember67\" class=\"ember-view reader-text-block__paragraph\">The models have not yet been released, and the rollout schedule is still dependent on safety testing. o3-mini is slated for release in late January 2025, with o3 following shortly after. Researchers can apply for early access to test the models, with an application deadline of January 10th, 2025. Pricing has also yet to be announced.<\/p>\n<h3 id=\"ember68\" class=\"ember-view reader-text-block__heading-3\">Why should you care?<\/h3>\n<p id=\"ember69\" class=\"ember-view reader-text-block__paragraph\">So what does this all mean? LLMs can now perform to human expert standards at many tasks &#8211; and these breakthroughs were achieved at an accelerating pace. Will the inference-time compute scaling paradigm continue to deliver new generations every 3 months, relative to the 1-2 years of the training-time scaling regime? How will these models perform in the real world beyond their benchmarks? Will o3 models rapidly begin to transform the global economy and disrupt huge numbers of jobs, or is the cost too large a bottleneck to adoption? On which tasks will it be worth spending 170x more compute for incrementally better performance (as with ARC-AGI)? Is this model AGI already? Do you need to find a new career?<\/p>\n<p id=\"ember70\" class=\"ember-view reader-text-block__paragraph\">While we don\u2019t think this model is AGI yet (a term with wildly differing definitions in any case), we think it is hugely significant and should be on the front page of all newspapers. It suggests that deep learning and the LLM paradigm don\u2019t have any obvious limits. Far from the slowdown and failed new model generations covered in the media, progress is faster than it has ever been on the most complex benchmarks. 
My key takeaway is that if we can develop a benchmark, or generate a few &#8211; or a few hundred &#8211; detailed reasoning examples for a category of human work, we can likely solve it with the help of extra synthetic reasoning data. (This doesn\u2019t yet apply to physical labor, but AI-based robotics is also rapidly progressing!) The price of o3 will be a large barrier initially &#8211; but we expect large improvements in the cost and particularly the efficiency of running parallel \u201csamples.\u201d The o3-mini also appears to be a game changer; however, its huge cost savings will likely come at the cost of narrower capabilities.<\/p>\n<p id=\"ember71\" class=\"ember-view reader-text-block__paragraph\">To achieve products with high enough reliability and affordability for mass adoption, we still think a large amount of work will be needed from LLM developers to optimize and customize these models for specific industries and niche tasks &#8211; including gathering industry-specific data, creating reasoning data, and building your own evaluations. With Google Gemini also joining the reasoning model race this week, and with open-source reasoning models from Alibaba Qwen and DeepSeek in China, we expect competition to drive affordability and developer customization options for these models. OpenAI has already announced it will release reinforcement learning-based reasoning fine-tuning options, and we think, eventually, there will also be reasoning model distillation options to compress larger models into smaller forms. 
So there is no better time to become an <a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/academy.towardsai.net\/courses\/beginner-to-advanced-llm-dev\" target=\"_self\" data-test-app-aware-link=\"\">LLM Developer with our own 80+ lesson Python course and learn to harness these models<\/a>!<\/p>\n<p id=\"ember72\" class=\"ember-view reader-text-block__paragraph\"><em>\u2014 <\/em><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"http:\/\/www.linkedin.com\/in\/louie-peters\" target=\"_self\" data-test-app-aware-link=\"\"><em>Louie Peters\u200a\u2014\u200aTowards AI Co-founder and CEO<\/em><\/a><\/p>\n<h3 id=\"ember73\" class=\"ember-view reader-text-block__heading-3\">Hottest News<\/h3>\n<p id=\"ember74\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/x.com\/OpenAI\/status\/1870186518230511844\" target=\"_self\" data-test-app-aware-link=\"\">1. OpenAI Announces OpenAI o3<\/a><\/p>\n<p id=\"ember75\" class=\"ember-view reader-text-block__paragraph\">OpenAI announced OpenAI o3, the latest model in its o-series of reasoning models. Building on its predecessors, o3 showcases huge leaps in mathematical and scientific reasoning, prompting discussions about its capabilities and constraints.<\/p>\n<p id=\"ember76\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/x.ai\/blog\/series-c\" target=\"_self\" data-test-app-aware-link=\"\">2. xAI Raises $6B Series C<\/a><\/p>\n<p id=\"ember77\" class=\"ember-view reader-text-block__paragraph\">Elon Musk\u2019s xAI announced it raised $6 billion in a Series C funding round, bringing its valuation to more than $40 billion. The company said the funding would be allocated to products and infrastructure, including its Grok AI model and the multibillion-dollar supercomputer site used to train its AI models. 
The Colossus supercomputer scaled to 100,000 NVIDIA Hopper GPUs in record time, and xAI plans to soon add another 100,000.<\/p>\n<p id=\"ember78\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/x.com\/pallavmac\/status\/1871022806286336250\" target=\"_self\" data-test-app-aware-link=\"\">3. OpenAI Is Offering 1 Million Free Tokens for GPT-4o and o1<\/a><\/p>\n<p id=\"ember79\" class=\"ember-view reader-text-block__paragraph\">A user on X highlighted that OpenAI seems to be offering 1 million free tokens for GPT-4o and o1 if you share your API usage with them for training. Users can get up to 10 million free tokens per day on smaller models for traffic shared with OpenAI. This is similar to Google Gemini\u2019s free tier strategy for its API, where data can be used for training. We think the race for user data has become even more critical given the success of reasoning models, where OpenAI could use thinking tokens from users\u2019 o1 model prompts to expand its reinforcement learning datasets.<\/p>\n<p id=\"ember80\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/thinking-mode\" target=\"_self\" data-test-app-aware-link=\"\">4. Google Releases Its Own \u2018Reasoning\u2019 AI Model<\/a><\/p>\n<p id=\"ember81\" class=\"ember-view reader-text-block__paragraph\">Google has released Gemini 2.0 Flash Thinking Mode, an experimental model trained to generate the &#8220;thinking process&#8221; the model goes through as part of its response. 
Thinking models are available in Google AI Studio and through the Gemini API.<\/p>\n<p id=\"ember82\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/promptwizard-the-future-of-prompt-optimization-through-feedback-driven-self-evolving-prompts\/\" target=\"_self\" data-test-app-aware-link=\"\">5. Microsoft AI Research Open-Sources PromptWizard<\/a><\/p>\n<p id=\"ember83\" class=\"ember-view reader-text-block__paragraph\">Researchers from Microsoft Research India have developed and open-sourced PromptWizard, an innovative AI framework for optimizing prompts in black-box LLMs. This framework employs a feedback-driven critique-and-synthesis mechanism to iteratively refine prompt instructions and in-context examples, enhancing task performance. PromptWizard operates through two primary phases: a generation phase and a test-time inference phase.<\/p>\n<p id=\"ember84\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/www.tii.ae\/news\/falcon-3-uaes-technology-innovation-institute-launches-worlds-most-powerful-small-ai-models\" target=\"_self\" data-test-app-aware-link=\"\">6. The Technology Innovation Institute in Abu Dhabi Released the Falcon 3 Family of Models<\/a><\/p>\n<p id=\"ember85\" class=\"ember-view reader-text-block__paragraph\">The UAE government-backed Technology Innovation Institute (TII) has announced the launch of Falcon 3, a family of open-source small language models (SLMs) designed to run efficiently on lightweight, single GPU-based infrastructures. Falcon 3 features four model sizes\u20141B, 3B, 7B, and 10B\u2014with base and instruction variants. 
According to the Hugging Face leaderboard, the models are already outperforming or closely matching popular open-source counterparts in their size class, including Meta\u2019s Llama and category leader Qwen-2.5.<\/p>\n<p id=\"ember86\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/www.salesforce.com\/news\/press-releases\/2024\/12\/17\/agentforce-2-0-announcement\/\" target=\"_self\" data-test-app-aware-link=\"\">7. Salesforce Drops Agentforce 2.0<\/a><\/p>\n<p id=\"ember87\" class=\"ember-view reader-text-block__paragraph\">Salesforce announced Agentforce 2.0: the newest version of Agentforce, the first digital labor platform for enterprises. This release introduces a new library of pre-built skills and workflow integrations for rapid customization, the ability to deploy Agentforce in Slack, and advancements in agentic reasoning and retrieval-augmented generation (RAG).<\/p>\n<p id=\"ember88\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/www.patronus.ai\/blog\/glider-state-of-the-art-slm-judge\" target=\"_self\" data-test-app-aware-link=\"\">8. Patronus AI Open Sources Glider: A 3B State-of-the-Art Small Language Model (SLM) Judge<\/a><\/p>\n<p id=\"ember89\" class=\"ember-view reader-text-block__paragraph\">Patronus AI has introduced Glider, a general-purpose 3.8B evaluation model. This open-source evaluator model provides quantitative and qualitative feedback for text inputs and outputs. It acts as a fast, inference-time guardrail for LLM systems, offering detailed reasoning chains and highlighting key phrases to enhance interpretability. 
Glider is built upon the Phi-3.5-mini-instruct base model and has been fine-tuned on diverse datasets spanning 685 domains and 183 evaluation criteria.<\/p>\n<h3 id=\"ember90\" class=\"ember-view reader-text-block__heading-3\">Five 5-minute reads\/videos to keep you learning<\/h3>\n<p id=\"ember91\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/www.anthropic.com\/research\/alignment-faking\" target=\"_self\" data-test-app-aware-link=\"\">1. Alignment Faking in Large Language Models<\/a><\/p>\n<p id=\"ember92\" class=\"ember-view reader-text-block__paragraph\">Alignment faking is where someone appears to share our views or values but is, in fact, only pretending to do so. A new paper from Anthropic\u2019s Alignment Science team, in collaboration with Redwood Research, provides the first empirical example of a large language model engaging in alignment faking without having been explicitly trained or instructed to do so.<\/p>\n<p id=\"ember93\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/towardsai.net\/p\/artificial-intelligence\/ai-safety-on-a-budget-your-guide-to-free-open-source-tools-for-implementing-safer-llms\" target=\"_self\" data-test-app-aware-link=\"\">2. AI Safety on a Budget: Your Guide to Free, Open-Source Tools for Implementing Safer LLMs<\/a><\/p>\n<p id=\"ember94\" class=\"ember-view reader-text-block__paragraph\">This blog shares some free AI safety tools. It covers everything you need to know, from guardrails that steer chatbots away from disaster to datasets that help identify toxic content. 
It also provides insights into the AI safety landscape and how to navigate it, especially on a budget.<\/p>\n<p id=\"ember95\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/www.youtube.com\/watch?v=CWTSImmqcvQ&amp;ab_channel=What%27sAIbyLouis-Fran%C3%A7oisBouchard\" target=\"_self\" data-test-app-aware-link=\"\">3. Fine-Tuning LLMs for RAG<\/a><\/p>\n<p id=\"ember96\" class=\"ember-view reader-text-block__paragraph\">This video explains why and when you should fine-tune your LLM in a RAG system &#8211; a concept useful for today&#8217;s AI engineers working with LLMs.<\/p>\n<p id=\"ember97\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/medium.com\/towards-artificial-intelligence\/the-real-reason-your-companys-ai-isn-t-working-hint-it-s-not-the-technology-37546d0524fe?sk=e19c15c1675b4a984a35a3fda64ef67c\" target=\"_self\" data-test-app-aware-link=\"\">4. The Real Reason Your Company\u2019s AI Isn\u2019t Working (Hint: It\u2019s Not the Technology)<\/a><\/p>\n<p id=\"ember98\" class=\"ember-view reader-text-block__paragraph\">The underlying reason many companies struggle to make AI tools work is not the technology itself. The real challenge lies in organizational structures, cultural resistance, a lack of proper training, and insufficient time allocated for exploration. 
This article presents some thoughts on addressing these issues, such as investing in leadership support, encouraging cultural change, offering tailored training sessions, and fostering an environment of experimentation.<\/p>\n<p id=\"ember99\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/medium.com\/towards-artificial-intelligence\/introducing-react-llm-agents-a-secret-to-more-capable-ai-a8d2bf71b02a?sk=203025a4478782e400273d7428cd2a12\" target=\"_self\" data-test-app-aware-link=\"\">5. Introducing ReACT LLM Agents: A Secret to More Capable AI<\/a><\/p>\n<p id=\"ember100\" class=\"ember-view reader-text-block__paragraph\">A ReACT agent is a special type of AI agent that uses both Reasoning and Acting to solve the tasks or problems we assign. This article explores this concept, presents use case examples, and explains how it has the potential to make AI more capable.<\/p>\n<h3 id=\"ember101\" class=\"ember-view reader-text-block__heading-3\">Repositories &amp; Tools<\/h3>\n<p id=\"ember102\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/github.com\/anthropics\/anthropic-cookbook\" target=\"_self\" data-test-app-aware-link=\"\">1. Anthropic Cookbook<\/a> provides code and guides designed to help developers build with Claude.<\/p>\n<p id=\"ember103\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/github.com\/Genesis-Embodied-AI\/Genesis\" target=\"_self\" data-test-app-aware-link=\"\">2. Genesis<\/a> is a physics platform for general-purpose robotics\/embodied AI\/physical AI applications.<\/p>\n<p id=\"ember104\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/github.com\/huggingface\/picotron\" target=\"_self\" data-test-app-aware-link=\"\">3. 
Picotron<\/a> is a minimalist repository for pre-training Llama-like models with 4D Parallelism.<\/p>\n<p id=\"ember105\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/github.com\/Helicone\/helicone\" target=\"_self\" data-test-app-aware-link=\"\">4. Helicone<\/a> is an open-source LLM observability platform.<\/p>\n<h3 id=\"ember106\" class=\"ember-view reader-text-block__heading-3\">Top Papers of The Week<\/h3>\n<p id=\"ember107\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/arxiv.org\/abs\/2412.15115\" target=\"_self\" data-test-app-aware-link=\"\">1. Qwen2.5 Technical Report<\/a><\/p>\n<p id=\"ember108\" class=\"ember-view reader-text-block__paragraph\">This report introduces Qwen2.5, a comprehensive series of LLMs designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. The pre-training dataset has been scaled from the previous 7 trillion tokens to 18 trillion tokens, and the post-training implements intricate supervised fine-tuning with over 1 million samples and multistage reinforcement learning.<\/p>\n<p id=\"ember109\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/arxiv.org\/abs\/2412.09871v1\" target=\"_self\" data-test-app-aware-link=\"\">2. Byte Latent Transformer: Patches Scale Better Than Tokens<\/a><\/p>\n<p id=\"ember110\" class=\"ember-view reader-text-block__paragraph\">This paper introduces the Byte Latent Transformer (BLT), a new byte-level LLM architecture that matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. 
Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it.<\/p>\n<p id=\"ember111\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/openai.com\/index\/deliberative-alignment\/\" target=\"_self\" data-test-app-aware-link=\"\">3. Deliberative Alignment: Reasoning Enables Safer Language Models<\/a><\/p>\n<p id=\"ember112\" class=\"ember-view reader-text-block__paragraph\">This paper introduces deliberative alignment, a training paradigm that directly teaches reasoning LLMs the text of human-written and interpretable safety specifications. It trains them to reason explicitly about these specifications before answering. OpenAI used deliberative alignment to align its o-series models, enabling them to use chain-of-thought (CoT) reasoning to reflect on user prompts, identify relevant text from OpenAI\u2019s internal policies, and draft safer responses.<\/p>\n<p id=\"ember113\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/arxiv.org\/abs\/2412.06845\" target=\"_self\" data-test-app-aware-link=\"\">4. Fully Open Source Moxin-7B Technical Report<\/a><\/p>\n<p id=\"ember114\" class=\"ember-view reader-text-block__paragraph\">This paper introduces Moxin 7B, a fully open-source LLM developed in accordance with the Model Openness Framework (MOF). The MOF is a ranked classification system that evaluates AI models based on model completeness and openness, adhering to the principles of open science, open source, open data, and open access. 
Experiments show that the model performs better in zero-shot evaluation than popular 7B models.<\/p>\n<p id=\"ember115\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/arxiv.org\/abs\/2407.11005\" target=\"_self\" data-test-app-aware-link=\"\">5. RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems<\/a><\/p>\n<p id=\"ember116\" class=\"ember-view reader-text-block__paragraph\">This paper introduces RAGBench, a comprehensive, large-scale RAG benchmark dataset of 100k examples. It covers five unique industry-specific domains and various RAG task types. RAGBench examples are sourced from industry corpora, such as user manuals, making it particularly relevant for industry applications.<\/p>\n<p id=\"ember117\" class=\"ember-view reader-text-block__paragraph\"><a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/arxiv.org\/abs\/2412.10117v2\" target=\"_self\" data-test-app-aware-link=\"\">6. CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models<\/a><\/p>\n<p id=\"ember118\" class=\"ember-view reader-text-block__paragraph\">This paper presents CosyVoice 2, an improved version of the CosyVoice streaming speech synthesis model that incorporates comprehensive and systematic optimizations. It introduces finite-scalar quantization to improve the codebook utilization of speech tokens and streamlines the model architecture to allow direct use of a pre-trained LLM. It also uses a chunk-aware causal flow matching model to support various synthesis scenarios.<\/p>\n<h3 id=\"ember119\" class=\"ember-view reader-text-block__heading-3\">Quick Links<\/h3>\n<p id=\"ember120\" class=\"ember-view reader-text-block__paragraph\">1. 
<a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/techcrunch.com\/2024\/12\/18\/openai-brings-chatgpt-to-your-landline\/\" target=\"_self\" data-test-app-aware-link=\"\">OpenAI brings ChatGPT to your landline<\/a>. As of Wednesday afternoon, you can call 1-800-242-8478 and OpenAI\u2019s AI-powered assistant will respond. The experience is more or less identical to Advanced Voice Mode: ChatGPT responds to the questions users ask over the phone and can handle tasks such as translating a sentence into a different language.<\/p>\n<p id=\"ember121\" class=\"ember-view reader-text-block__paragraph\">2. <a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/techcrunch.com\/2024\/12\/20\/google-is-expanding-geminis-in-depth-research-mode-to-40-languages\/\" target=\"_self\" data-test-app-aware-link=\"\">Google is expanding Gemini\u2019s in-depth research mode to 40 more languages<\/a>. The company launched the in-depth research mode earlier this month, giving Google One AI Premium plan users access to an AI-powered research assistant.<\/p>\n<p id=\"ember122\" class=\"ember-view reader-text-block__paragraph\">3. <a class=\"uBRuLksiXRXJNxzCdmDLlNvbeHoNnTVInTsbWCI\" href=\"https:\/\/venturebeat.com\/programming-development\/github-is-making-its-ai-programming-copilot-free-for-vs-code-developers-with-limits\/\" target=\"_self\" data-test-app-aware-link=\"\">GitHub has launched GitHub Copilot Free<\/a>, an accessible version of its popular AI-powered coding assistant &#8211; with limits. The new free tier for VS Code aims to expand the AI-powered code completion assistant\u2019s reach to a broader audience of developers &#8211; namely, those with only light usage needs and tighter budgets.<\/p>\n<\/div>\n<\/div>\n<\/figure>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Also, Gemini 2.0 Flash Thinking, xAI\u2019s $6bn, Byte Latent Transformer, and more! 
tejaarhnews.com\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/?p=886#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/?p=886#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/tejaarhnews.com\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg\",\"datePublished\":\"2024-12-26T07:57:39+00:00\",\"dateModified\":\"2024-12-26T07:59:01+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/#\\\/schema\\\/person\\\/29c3327567bfee7a6c42b71a5c9f0842\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/?p=886#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/tejaarhnews.com\\\/?p=886\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/?p=886#primaryimage\",\"url\":\"https:\\\/\\\/tejaarhnews.com\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg\",\"contentUrl\":\"https:\\\/\\\/tejaarhnews.com\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg\",\"width\":263,\"height\":191},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/?p=886#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/tejaarhnews.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference Compute 
Scaling\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/#website\",\"url\":\"https:\\\/\\\/tejaarhnews.com\\\/\",\"name\":\"tejaarhnews.com\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/tejaarhnews.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/tejaarhnews.com\\\/#\\\/schema\\\/person\\\/29c3327567bfee7a6c42b71a5c9f0842\",\"name\":\"HInd Rashed ALsheekh\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=ge4e169e0c21a5fefd7e9f6862163b1a7\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=g\",\"caption\":\"HInd Rashed ALsheekh\"},\"sameAs\":[\"http:\\\/\\\/wwww.taqaasoum.com\"],\"url\":\"https:\\\/\\\/tejaarhnews.com\\\/?author=5\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling - tejaarhnews.com","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/tejaarhnews.com\/?p=886","og_locale":"en_US","og_type":"article","og_title":"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling - tejaarhnews.com","og_description":"Also, Gemini 2.0 Flash Thinking, Xai\u2019s $6bn, Byte Latent Transformer, and more! 
(Adapted with edits and abridged from Louie Peters, Towards AI) What happened this week in AI by Louie OpenAI wrapped up its &#8220;12 Days of OpenAI&#8221; campaign and saved the best till last with the reveal of its o3 and o3-mini reasoning models. [&hellip;]","og_url":"https:\/\/tejaarhnews.com\/?p=886","og_site_name":"tejaarhnews.com","article_published_time":"2024-12-26T07:57:39+00:00","article_modified_time":"2024-12-26T07:59:01+00:00","og_image":[{"width":263,"height":191,"url":"https:\/\/tejaarhnews.com\/wp-content\/uploads\/2024\/12\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg","type":"image\/jpeg"}],"author":"HInd Rashed ALsheekh","twitter_card":"summary_large_image","twitter_misc":{"Written by":"HInd Rashed ALsheekh","Est. reading time":"13 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/tejaarhnews.com\/?p=886#article","isPartOf":{"@id":"https:\/\/tejaarhnews.com\/?p=886"},"author":{"name":"HInd Rashed ALsheekh","@id":"https:\/\/tejaarhnews.com\/#\/schema\/person\/29c3327567bfee7a6c42b71a5c9f0842"},"headline":"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling","datePublished":"2024-12-26T07:57:39+00:00","dateModified":"2024-12-26T07:59:01+00:00","mainEntityOfPage":{"@id":"https:\/\/tejaarhnews.com\/?p=886"},"wordCount":2717,"commentCount":0,"image":{"@id":"https:\/\/tejaarhnews.com\/?p=886#primaryimage"},"thumbnailUrl":"https:\/\/tejaarhnews.com\/wp-content\/uploads\/2024\/12\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg","articleSection":["AI Tejaarh Solutions","Blog","Technology"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/tejaarhnews.com\/?p=886#respond"]}]},{"@type":"WebPage","@id":"https:\/\/tejaarhnews.com\/?p=886","url":"https:\/\/tejaarhnews.com\/?p=886","name":"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference 
Compute Scaling - tejaarhnews.com","isPartOf":{"@id":"https:\/\/tejaarhnews.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/tejaarhnews.com\/?p=886#primaryimage"},"image":{"@id":"https:\/\/tejaarhnews.com\/?p=886#primaryimage"},"thumbnailUrl":"https:\/\/tejaarhnews.com\/wp-content\/uploads\/2024\/12\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg","datePublished":"2024-12-26T07:57:39+00:00","dateModified":"2024-12-26T07:59:01+00:00","author":{"@id":"https:\/\/tejaarhnews.com\/#\/schema\/person\/29c3327567bfee7a6c42b71a5c9f0842"},"breadcrumb":{"@id":"https:\/\/tejaarhnews.com\/?p=886#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/tejaarhnews.com\/?p=886"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tejaarhnews.com\/?p=886#primaryimage","url":"https:\/\/tejaarhnews.com\/wp-content\/uploads\/2024\/12\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg","contentUrl":"https:\/\/tejaarhnews.com\/wp-content\/uploads\/2024\/12\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg","width":263,"height":191},{"@type":"BreadcrumbList","@id":"https:\/\/tejaarhnews.com\/?p=886#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/tejaarhnews.com\/"},{"@type":"ListItem","position":2,"name":"TAI 131: OpenAI\u2019s o3 Passes Human Experts; LLMs Accelerating With Inference Compute Scaling"}]},{"@type":"WebSite","@id":"https:\/\/tejaarhnews.com\/#website","url":"https:\/\/tejaarhnews.com\/","name":"tejaarhnews.com","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/tejaarhnews.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/tejaarhnews.com\/#\/schema\/person\/29c3327567bfee7a6c42b71a5c9f0842","name":"HInd 
Rashed ALsheekh","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=ge4e169e0c21a5fefd7e9f6862163b1a7","url":"https:\/\/secure.gravatar.com\/avatar\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=g","caption":"HInd Rashed ALsheekh"},"sameAs":["http:\/\/wwww.taqaasoum.com"],"url":"https:\/\/tejaarhnews.com\/?author=5"}]}},"jetpack_featured_media_url":"https:\/\/tejaarhnews.com\/wp-content\/uploads\/2024\/12\/Adapted-with-edits-and-abridged-from-Louie-Peters-Towards-AI.jpeg","jetpack_sharing_enabled":true,"publishpress_future_action":{"enabled":false,"date":"2026-04-24 00:21:20","action":"change-status","newStatus":"draft","terms":[],"taxonomy":"category","extraData":[]},"publishpress_future_workflow_manual_trigger":{"enabledWorkflows":[]},"authors":[{"term_id":143,"user_id":5,"is_guest":0,"slug":"hind","display_name":"HInd Rashed 
ALsheekh","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/c9f02afb6c61f17f26e5cb572f6567d2b47417bd0546e58624b4d5b764093388?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/posts\/886","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=886"}],"version-history":[{"count":1,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/posts\/886\/revisions"}],"predecessor-version":[{"id":890,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/posts\/886\/revisions\/890"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=\/wp\/v2\/media\/889"}],"wp:attachment":[{"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=886"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=886"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=886"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/tejaarhnews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fppma_author&post=886"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}