Monday, August 5, 2024

Mystery of Q-Star | The AI which threatens Humanity | Open AI | Microsoft

 


Hello, friends! Exactly one year ago, ChatGPT was launched to the public, and since then, it has had a profound impact on the world. "ChatGPT!" People found it difficult to believe that an Artificial Intelligence software could be this powerful. The company that developed it, OpenAI, and its CEO, Sam Altman, have become symbols of the ongoing AI revolution.

However, over the last few days, strange events took place at the company. All of a sudden, Sam Altman was fired by the company's Board of Directors! When the other employees found out, they were in an uproar, and many threatened to resign. Over the next four days, OpenAI's CEO changed three times.

But the most shocking thing was that the root cause of this turmoil is said to have been a mysterious AI named Q-Star. Yes, you heard that right. A few days before Sam Altman was fired, researchers within the company penned a letter to the board of directors. The letter was a warning: it disclosed the development of a highly potent AI that posed a potential threat to humanity.

This AI not only excelled at solving complex mathematical and scientific problems but could also, to some extent, predict future events. Internally, this AI is referred to as Q-Star. This video aims to delve into this mystery. "-Why should we trust you? -Um, you shouldn't." "This morning, a blockbuster shake-up in the world of Artificial Intelligence." "Sam Altman, the co-founder of OpenAI, being forced out by the Board." "Altman once chose Microsoft. And he chose Microsoft again." "Yet another twist in the Sam Altman saga at OpenAI."

This year, Tom Cruise's latest Mission Impossible film, "Dead Reckoning Part One," was released. Friends, if you remember, the main antagonist in this film was not a human but an Artificial Intelligence software.

In this film, this AI software was named the Entity. The Entity was so powerful that it was omnipresent. It could manipulate humans however it wanted, and mathematically, it could predict the future through mass surveillance. Predicting the future didn't mean it could provide a 10- or 15-year outlook; instead, in any particular situation, it could predict the outcome for the next day or the next week.

This AI excelled not just at two or three specific tasks; it outperformed humans in nearly every task it undertook. This form of Artificial Intelligence is known as AGI, or Artificial General Intelligence. "AGI, a computer system that can do any job or any task that a human does, but only better." The famous AI software available today, like the Large Language Models such as ChatGPT, or generative AI tools like MidJourney, is classified into the category of Weak AI.

This software is considered weak because it excels only at specific tasks: it executes the tasks it was trained for better than humans, but it specialises in only those tasks. A robust AI like an AGI, if it existed, would be able to perform a wide variety of tasks better than humans. But at this point in time, Strong AIs do not exist.

OpenAI was established as a non-profit in 2015. It had a singular mission: to develop Artificial General Intelligence, or AGI, for the benefit of humanity. The company's website clearly outlines this mission: "To ensure that Artificial General Intelligence benefits all of humanity." Prominent tech entrepreneurs came together to create this company, including Sam Altman and Elon Musk.

There were 10 co-founding members of OpenAI in total, including the current chief scientist, Ilya Sutskever, and the President of OpenAI, Greg Brockman. Remember these names, because they play a crucial role in this saga. The co-founders collectively pledged $1 billion to OpenAI. In 2019, four years after its incorporation, Sam Altman assumed the role of CEO.

Four years after that, on November 17, 2023, the board of directors unexpectedly fired Sam. In case you don't know, friends, large corporations often have a Board of Directors, and in most cases it holds the authority to appoint or dismiss the CEO. As per the rules, it is normal for the Board to have this authority.

In this case, OpenAI's Board comprised six members. I have already named three of them: Sam, Ilya, and Greg. The other three were independent directors: Adam D'Angelo, the CEO of Quora; Tasha McCauley, a tech entrepreneur; and Helen Toner. Decisions are typically made through majority voting, and in this scenario, Sam obviously wouldn't vote to fire himself.

So only five directors remained. Apart from firing Sam, the board dismissed Greg too, which suggests that Greg likely did not vote against Sam. That left only four individuals: Ilya, Adam, Tasha, and Helen. These four informed Sam of their decision to fire him via a Google Meet call.

The public was not provided with much information about the reason behind this. The board vaguely mentioned issues with Sam's communication and hinted that he may have been concealing information. After their termination, neither Sam nor Greg issued any statements explaining the circumstances. Sam expressed his disappointment on Twitter.

This was a shocking decision in the tech world: the abrupt firing of such a powerful and influential CEO by four members of the Board. What were the undisclosed reasons behind it? As mentioned earlier, OpenAI was initially established as a non-profit, something quite unique and important, because the other popular companies in Silicon Valley are all for-profit.

Facebook (now Meta), Google, Microsoft, and Apple all develop products and sell services to earn money. They operate to make profits. But OpenAI's primary objective was to develop an AGI for the benefit of humanity; it was more of a research facility. Its charter outlined that the company's duty is towards humanity.

Neither towards its investors, nor towards its employees. However, this non-profit model was short-lived. In 2019, the year Sam Altman assumed the role of CEO, OpenAI introduced a for-profit subsidiary named OpenAI Global LLC. This subsidiary operates on a capped-profit model: the profit it can pay out to investors is capped at a limit.

They limited returns to 100 times the initial investment. Investors in this for-profit company would receive at most 100 times their investment in returns. Any excess profits earned by the company would be directed back to the non-profit parent. In 2019, OpenAI's for-profit subsidiary secured its first funding from Microsoft.
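To make the capped-profit mechanics concrete, here is a toy sketch. The figures and function names are my own illustration, not OpenAI's actual investment terms:

```python
# Toy illustration of a capped-profit split (hypothetical numbers, not
# OpenAI's real terms): returns above CAP_MULTIPLE times the investment
# flow to the non-profit parent instead of the investor.

CAP_MULTIPLE = 100  # investor returns capped at 100x the initial investment

def split_returns(investment, total_return):
    """Split total_return into (investor's share, non-profit's share)."""
    investor_share = min(total_return, investment * CAP_MULTIPLE)
    nonprofit_share = total_return - investor_share
    return investor_share, nonprofit_share

# A $1M investment whose stake eventually returns $250M:
print(split_returns(1_000_000, 250_000_000))  # (100000000, 150000000)
```

So under the cap, an early backer's upside is bounded at 100x, and everything beyond that limit flows back to the non-profit parent.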

A monumental $1 billion investment from Microsoft. Over the following four years, as the subsidiary gained widespread recognition, it attracted a total of $13 billion in investments. Presently, Microsoft is reported to hold a 49% stake in OpenAI's for-profit arm. Before delving into Microsoft's involvement, it's crucial to understand that when the for-profit subsidiary was established, it was asserted that primary control would remain with the non-profit parent company, OpenAI.

However, these conflicting interests raise a major question: how should the balance between for-profit and non-profit activities be struck? Furthermore, once AGI is developed, how much of it should be commercialized and how much should remain non-profit? If profit were prioritised in all aspects, it would have an adverse effect on the world.

As exemplified by Facebook, whose algorithms prioritized profit over users' well-being, whether that meant mental health issues, user addiction, hate speech spreading on the platform, or even riots around the world. "Facebook, in India, has been selective in curbing hate speech, misinformation and inflammatory posts, this is according to leaked documents obtained by the Associated Press."

Two years ago, there was an exposé called the Facebook Papers; I talked about it in this video. Some people are worried that AI would do the same. Before Sam was fired, insiders revealed that researchers working at OpenAI had written a letter to the Board of OpenAI. This letter expressed concerns about Q*, an AI they were developing that could be a significant step toward achieving AGI.

But they were concerned about Q-Star's potential. The exact capabilities of Q-Star are known only to the researchers and employees within the company, but conceptually, Q-learning is an AI concept that falls within the realm of reinforcement learning. Reinforcement learning is an AI training approach.
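For intuition, the classic tabular Q-learning update can be sketched in a few lines. This is the generic textbook rule, a hypothetical illustration only; OpenAI has not published how its Q* actually works:

```python
# Minimal sketch of the tabular Q-learning update rule (textbook version,
# not OpenAI's actual Q* method, whose details are not public).
# Q maps a (state, action) pair to its estimated long-term value.

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q

# Example: the agent moves "right" from state 0 to state 1 and earns reward 1.
Q = {}
q_update(Q, state=0, action="right", reward=1.0, next_state=1,
         actions=["left", "right"])
print(Q[(0, "right")])  # 0.5
```

Each update nudges the estimate for Q(S, A) toward the reward just received plus the discounted value of the best follow-up move, which is how the table gradually converges on good decisions.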

With this technique, the AI learns through human feedback, continually improving its understanding of the environment and its decision-making. Those of you who have completed my course on ChatGPT will be familiar with this. For those unfamiliar, I'd like to tell you that I've created a 4.5-hour course on ChatGPT, encompassing both theory and practical aspects.

In it, I teach you how you can benefit from using AI to take your life and your career to the next level. There are six chapters in the course. In the first chapter, I explain the basics of Reinforcement Learning. In the second chapter, I explain Prompt Engineering, Tokens, and the technology behind ChatGPT.

From the third chapter onwards, you will realize that this is a tool worth using every day. If you have to study for exams, studying can become two to three times more efficient. You can use it for motivation, and if you need advice on anything, ChatGPT can help you. The fourth chapter focuses on daily-life uses.

For example, if you have to travel somewhere, you won't have to spend money on travel agents for planning; ChatGPT can be your travel agent for free. ChatGPT can help you with food and diet planning too. The fifth chapter is my personal favourite because it focuses on business owners: how you can use it to increase sales in your business, to handle customer feedback, and to market your business too.

This is why I use ChatGPT daily, and you won't believe how much my productivity has increased. This entire course is a one-time purchase; you get lifetime access to the videos. And there's more good news: since this technology is improving exponentially, I'll add free updates to this course within a few months.

Those of you who have already purchased this course, check it out after two months; new lessons will be added. And those of you who haven't joined yet, the link is embedded in this QR code, or you can find it in the description or the pinned comment. Use the coupon code AGI40 to get 40% off. This will be available only for the first 400 people.

You can check it out; I am sure you will find it very useful. Now, returning to Q*: it is named after the Q-value function, denoted Q(S, A), where S represents the state and A signifies the action. Q* is the function that gives the most optimal value at each point. To illustrate, consider a game of chess.

Imagine your chess piece occupying a specific square on the board; its position would be termed the state S. The subsequent move you intend to make would be the action A. The Q-value function would evaluate all possible scenarios: all the potential outcomes for every possible move you could make.

Following a thorough analysis, the function identifies the best possible move given your current position. This best, most optimal move derived from the Q-value function is termed Q-star in mathematical language. It would be the best move to play in any given situation in this game of chess.
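In code, that "best move" is simply the action with the highest Q-value for the current state. Here is a hypothetical toy position; the moves and values are invented for illustration, not from any real engine:

```python
# Hypothetical sketch: once a Q-table has been learned, the "Q-star" move
# in any state is just the action maximizing Q(state, action).
# (Illustrative only; real chess engines use far richer methods.)

def best_move(Q, state, actions):
    """Return the action with the highest Q-value in the given state."""
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

# Toy position: three candidate moves with made-up learned values.
Q = {("mid-game", "Nf3"): 0.2,
     ("mid-game", "Qh5"): -0.4,
     ("mid-game", "O-O"): 0.7}
print(best_move(Q, "mid-game", ["Nf3", "Qh5", "O-O"]))  # O-O
```

The principle of picking whichever action maximizes a learned value function is exactly what the narration describes, just scaled down to three moves.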

I used the chessboard as an example, but the same can be done with anything in the world. Picture driving on a highway, where an AI can observe the speeds and driving styles of surrounding cars. By analyzing these factors, it can forecast how those cars might move in the next few seconds, based on which it can give you the most effective driving instruction.

Similarly, envision Q* AI analyzing every possible scenario before an election and predicting its potential outcomes. If OpenAI's mysterious Q* possesses such capabilities, it has the potential to make significant predictions about the future. The things shown in the Mission Impossible film might just be possible.

Predicting human thought patterns; analysing all possible decisions at any given moment; accurately predicting how a certain person might decide in specific situations. From business dealings to political moves, this Q* AI has the potential to influence everything. Humans can do this to some extent, but an AGI could surpass human capabilities, since humans have their biases and emotions.

We make decisions based on our emotions, but the AI would make its predictions solely based on mathematics. In a chess game, for instance, after evaluating all possible moves, this AI can provide the most optimal one. It would be able to give you the most accurate predictions. For now, ChatGPT excels at writing and language translation by predicting the next word.

However, integrating Q-value learning into this AI would elevate its capabilities to provide optimal answers to any and all questions. These are some speculations about the mysterious Q* AI; only the people working at OpenAI know how far it has progressed, and how close it is to becoming an AGI. The individuals who raised concerns about Q* were apprehensive about the direction OpenAI was heading.

Their fear stemmed from a potential imbalance between the benefits and harms to humanity. Although the board of directors cited a vague reason for Sam's dismissal, the underlying issue revolved around conflicting ideologies. On one side was the for-profit group advocating commercialization; on the other, the non-profit group more concerned about the potential threats. Which of these ideologies is right? That is an ongoing debate.

Advocates for the for-profit side argue that substantial funds from investors are necessary for technological advancement, without which it won't be possible to succeed. Non-profit supporters, on the other hand, express concerns that a profit-driven approach could compromise the original mission and pose significant risks to humanity.

A few days before this controversy began, on 10th November, Microsoft's President Brad Smith said this at a conference: "Which would you have more confidence in? Getting your technology from a non-profit, or a for-profit company that is entirely controlled by one human being?" Here, he was indirectly talking about Mark Zuckerberg.

But within OpenAI, the intricate balance between the two sides had been deteriorating for some time. In February 2023, ChatGPT Plus was introduced as the first paid version, followed by the launch of an API (Application Programming Interface) on March 1st, allowing other companies to integrate ChatGPT into their own systems.

On March 14th, GPT-4 was unveiled. Employees noted a shift towards hyper-commercialization in the preceding months, creating a divide between the two factions. After ChatGPT's release, the path to revenue and profits was evident. They couldn't continue being an idealistic research lab; they had customers and had to serve those customers.

In October 2023, OpenAI introduced its advanced image generator, DALL-E 3, integrating it with the paid versions of ChatGPT. Subsequently, on November 6th, the company hosted its first developer conference. "Welcome to our first-ever OpenAI Dev Day. Today, we've got about 2 million developers building on our APIs for a wide variety of use cases, doing amazing stuff..." During this conference, Sam delivered a presentation reminiscent of those from Apple or Google, announcing the capability to create custom-built models of ChatGPT, referred to as GPTs. This expands from a singular ChatGPT to numerous customized GPTs. We'll talk about this in greater detail in the course update.

But with the increasing commercialization, on one side were CEO Sam and President Greg advocating for and encouraging commercialization, while on the other side, Chief Scientist Ilya and others grew uncomfortable with it. Ilya was driven by a strong commitment to AI safety. At one point, he had told employees that he worried AGI systems would treat humans the way humans treat animals today.

According to Ilya, AGI is not a distant prospect; we will see AGI in action in the near future, so we need to be prepared for it. "More and more people see what AI can do and where it is headed. Then it will become clear how much trepidation is appropriate." In July, OpenAI announced the formation of a superalignment team dedicated to AI safety techniques, led by Ilya.

The company allocated 20% of its compute exclusively for this purpose, emphasizing safety and AI alignment. By August-September, a clear dichotomy had emerged between two distinct factions within OpenAI, working in opposite directions. Sam focused on upcoming launches and the next big thing, talking about GPT-5, while Ilya concentrated on enhancing AI safety within the company, outlining necessary precautions.

While Sam pursued raising billions in investment for accelerated development, the other four members of OpenAI's Board of Directors were leaning towards Ilya's conservative approach. Upon receiving the letter about the development of the powerful Q*, the board recognized the potential benefit of removing the for-profit faction from the company.

Following Sam's dismissal, the board swiftly appointed Mira Murati as interim CEO on November 17th. However, by the morning of November 19th, a stir had ensued among the company's employees, with a significant majority supporting Sam. Microsoft found itself in a delicate situation: holding a 49% stake in the for-profit subsidiary, it didn't want the company to disintegrate.

Microsoft started pressuring the Board to reinstate Sam as CEO. Negotiations between the board and Sam took place on November 19th, when Sam had to use a guest pass to enter the OpenAI office, fueling speculation that he might return as CEO. But that didn't happen. The next day, OpenAI announced that the ex-CEO of Twitch, Emmett Shear, was the new CEO of OpenAI.

On the same day, Microsoft's CEO Satya Nadella announced the creation of a new advanced AI research team at Microsoft, led by Sam and Greg. The move catapulted Microsoft's stock price to a record high, and people started wondering how OpenAI's new CEO, Emmett Shear, would react. *Should I just leave?* Amidst the uncertainty, news broke that 743 of the company's 770 employees had signed a letter threatening resignation if Sam and Greg weren't reinstated.

Over 90% of the workforce threatened to resign. Mira Murati, the interim CEO, was the first to sign the letter, followed by a wave of tweets from employees emphasizing that "OpenAI is nothing without its people." That's true: if the CEO of a company is fired and 90% of the employees leave, the company becomes worthless.

Then came the biggest twist in this story: Ilya signed the letter and posted this tweet. Ilya had realized that the company's survival was crucial for implementing safety precautions. As a result, three co-founders, Sam, Ilya, and Greg, united. However, the three independent board members remained steadfast.

Adding to the complexity, the new CEO, Emmett, also threatened to resign, demanding clarity on the reasons behind Sam's termination. In the aftermath, it became evident that there was no alternative. Consequently, on November 21st, Sam was reinstated as the CEO of OpenAI. In the end, the three independent directors were powerless against the CEO, the co-founders, and the employees of the company.

After Sam was reinstated, two of these three board members were removed, leaving only Adam D'Angelo. Both departing board members were women, and two new members joined the board: Bret Taylor, former co-CEO of Salesforce, and Larry Summers, former Secretary of the Treasury. The new board's initial task was to appoint a larger board with nine members.

Sam posted a tweet, stating, "I love OpenAI, and everything I've done in the past few days has been in service of keeping this team and its mission together. When I decided to join Microsoft..., it was clear that was the best path for me... (but) I'm looking forward to returning to OpenAI with the new Board and with Satya's support..."

From Satya Nadella's perspective, regardless of the outcome, Microsoft stood to benefit. Even if OpenAI couldn't survive as a company, the new department created at Microsoft would have absorbed almost all of OpenAI's employees, CEO, and co-founders, and they could have continued their projects without issue.

But now that OpenAI has survived this turmoil, its established partnership with Microsoft remains, and Microsoft will continue to benefit. Satya Nadella tweeted: "We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a..."

It is expected that Satya Nadella might join OpenAI's board. While the company is currently secure, the question remains: will OpenAI lean more towards a for-profit approach or maintain its non-profit values? And how will that impact AGI development? Only time can provide the answers. One certainty is that AI isn't going anywhere.

Artificial Intelligence has become an integral part of our world, and the sooner you adapt to using it, the easier it will be to stay ahead in this rapidly changing world. The link to the course is provided in the description below. And if you want to know how the world is changing with the advent of AI, I have explained it in this video.

 
