AI – Chrife.com.gh (https://chrife.com.gh): Everyday news from a Christian Fellow

The Important Difference Between Generative AI And AGI
https://chrife.com.gh/the-important-difference-between-generative-ai-and-agi/ (Thu, 09 May 2024 13:37:57 +0000)

The post The Important Difference Between Generative AI And AGI appeared first on Chrife.com.gh.

]]>

In the fast-evolving landscape of artificial intelligence, two concepts often spark vigorous debate among tech enthusiasts: Generative AI and Artificial General Intelligence (AGI). While both promise to revolutionize our interaction with machines, they serve fundamentally different functions and embody distinct potential futures. Let’s dive into these differences and explore what each form of AI means for tomorrow.

What Is Generative AI?

Think of Generative AI as a highly skilled parrot. It’s capable of mimicking complex patterns, producing diverse content, and occasionally surprising us with outputs that seem creatively brilliant. However, like a parrot, Generative AI does not truly “understand” the content it creates. It operates by digesting large datasets and predicting what comes next, whether the next word in a sentence or the next stroke in a digital painting.

For example, when Generative AI writes a poem about love, it doesn’t draw on any deep, emotional reservoirs; instead, it relies on a vast database of words and phrases typically associated with love in human writing. This makes it excellent for tasks like drafting articles on global economics or generating marketing copy, as it can convincingly mimic human-like prose based on the information it’s trained on. However, it lacks the ability to grasp complex human experiences or perform tasks it hasn’t been specifically programmed to handle, such as managing your taxes or strategizing economic policies.
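The "predict what comes next" mechanism can be illustrated with a toy sketch. The corpus below is invented for illustration, and real systems use neural networks trained on billions of tokens rather than raw counts, but the underlying principle of statistical next-word prediction is the same:

```python
from collections import Counter, defaultdict

# A hypothetical tiny "training corpus" about love.
corpus = ("love is patient love is kind love never fails "
          "love is a choice love is patient").split()

# Count which word follows which.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no understanding involved."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("love"))  # -> "is", simply because "is" most often followed "love"
```

The model emits "is" after "love" not because it knows anything about love, but because that pairing dominated its data, which is the parrot analogy in miniature.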

Artificial General Intelligence (AGI): The Next Frontier

AGI, or Artificial General Intelligence, represents a theoretical leap in the field of AI, aiming to create machines that do far more than perform tasks—they would understand, innovate, and adapt. The concept of AGI is to mimic human cognitive abilities comprehensively, enabling machines to learn and execute a vast array of tasks, from driving cars to making medical diagnoses. Unlike anything in current technology, AGI would not only replicate human actions but also grasp the intricacies and contexts of those actions.

However, it’s crucial to understand that AGI does not yet exist and remains a subject of considerable debate and speculation within the scientific community. Some experts believe the creation of AGI could be just around the corner, thanks to rapid advancements in technology, while others argue that true AGI might never be achieved due to insurmountable ethical, technical, and philosophical challenges.

Technical Challenges Facing AGI

The development of AGI faces numerous technical hurdles that are fundamentally different and more complex than those encountered in creating generative AI. One of the primary challenges is developing an understanding of context and generalization. Unlike generative AI, which operates within the confines of specific datasets, AGI would need to intuitively grasp how different pieces of information relate to each other across various domains. This requires not just processing power but a sophisticated model of artificial cognition that can mimic the human ability to connect disparate ideas and experiences.

Another significant challenge is sensory perception and interaction with the physical world. For AGI to truly function like a human, it would need to perceive its environment in a holistic manner—interpreting visual, auditory, and other sensory data to make informed decisions based on real-time inputs. This involves not only recognizing objects and sounds but understanding their significance in a broader context, a task that current AI systems struggle with.

Additionally, AGI must be able to learn from limited information and apply this learning adaptively across different situations. This concept, known as transfer learning, is something humans do naturally but is incredibly difficult to replicate in machines. Current AI models require vast amounts of data to learn effectively and are generally poor at applying what they’ve learned in one context to another without extensive retraining.
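The idea of transfer learning can be caricatured in a few lines. This is an illustrative stand-in, not a real machine-learning pipeline: `pretrained_features` plays the role of a backbone learned from vast data on one task, which is then frozen and reused on a new task where only a tiny "head" is fitted to a handful of examples:

```python
from collections import defaultdict

def pretrained_features(text):
    """Stand-in for a backbone learned from vast data: maps input to features."""
    return [len(text), text.count(" ") + 1, sum(c.isdigit() for c in text)]

def fit_head(examples):
    """Fit a tiny task-specific 'head' (per-label centroids) on few examples."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for text, label in examples:
        for i, v in enumerate(pretrained_features(text)):
            sums[label][i] += v
        counts[label] += 1
    return {lab: [v / counts[lab] for v in vec] for lab, vec in sums.items()}

def classify(head, text):
    """Nearest-centroid classification in the frozen feature space."""
    feats = pretrained_features(text)
    return min(head, key=lambda lab: sum((a - b) ** 2 for a, b in zip(feats, head[lab])))

# Only two labelled examples are needed, because the features come "pre-learned".
head = fit_head([("hi", "short"), ("a fairly long sentence here", "long")])
print(classify(head, "ok"))  # -> short
```

Humans do this constantly: knowledge acquired in one domain transfers cheaply to another. Current AI models, by contrast, usually need extensive retraining when the task or data distribution shifts.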

Key Distinctions Between Generative AI and AGI

To fully appreciate the transformative potential of AI, it’s essential to understand the fundamental distinctions between Generative AI and AGI. Here are the key differences:

  1. Capability: Generative AI excels at replication and is adept at producing content based on learned patterns and datasets. It can generate impressive results within its specific scope but doesn’t venture beyond its programming. AGI, on the other hand, aims to be a powerhouse of innovation, capable of understanding and creatively solving problems across various fields, much like a human would.
  2. Understanding: Generative AI operates without any real comprehension of its output; it uses statistical models and algorithms to predict and generate results based on previous data. AGI, by contrast, would need to develop a genuine understanding of the world around it, making connections and having insights that are currently beyond the reach of any AI system.
  3. Application: Today, Generative AI is widely used across industries to enhance human productivity and foster creativity, performing tasks ranging from simple data processing to complex content creation. AGI, however, remains a conceptual goal. If realized, it could fundamentally transform society by autonomously performing any intellectual task that a human can, potentially redefining roles in every sector.

Ethical And Societal Implications

The distinction between these technologies isn’t just technical; it’s fundamentally ethical. Generative AI, while transformative, raises questions about authenticity and intellectual property. AGI, however, prompts deeper inquiries into the nature of consciousness, the rights of sentient machines, and the potential for unprecedented impacts on employment and societal structures.

Both forms of AI demand careful regulation and foresight. The ongoing development and potential realization of AGI must be approached with a balanced perspective, considering both the immense benefits and the significant risks.

The journey from Generative AI to AGI is not merely one of increasing complexity but a paradigm shift in how we interact with machines. As we advance, understanding these distinctions will be crucial for harnessing their potential responsibly. With Generative AI enhancing our capabilities and AGI potentially redefining them, our approach to technology’s future must be as adaptive and innovative as the intelligence we aspire to create.

Source: Forbes.com, Author: Bernard Marr

ChemCrow: The Next Frontier in AI-Driven Chemical Synthesis
https://chrife.com.gh/chemcrow-the-next-frontier-in-ai-driven-chemical-synthesis/ (Wed, 08 May 2024 23:18:38 +0000)

ChemCrow, an AI developed by researchers at EPFL, integrates multiple expert tools to perform chemical research tasks with unprecedented efficiency.

Chemistry, with its intricate processes and vast potential for innovation, has always been a challenge for automation. Traditional computational tools, despite their advanced capabilities, often remain underutilized due to their complexity and the specialized knowledge required to operate them.

AI Revolution in Chemistry

Now, researchers in the group of Philippe Schwaller at EPFL have developed ChemCrow, an AI that integrates 18 expertly designed tools, enabling it to navigate and perform tasks within chemical research with unprecedented efficiency. “You might wonder why a crow?” asks Schwaller. “Because crows are known to use tools well.”

ChemCrow was developed by PhD students Andres Bran and Oliver Schilter (EPFL, NCCR Catalysis) in collaboration with Sam Cox and Professor Andrew White (FutureHouse and University of Rochester).

ChemCrow Conceptual Art

ChemCrow is based on a large language model (LLM), such as GPT-4, enhanced with LangChain for tool integration, allowing it to autonomously perform chemical synthesis tasks. The scientists augmented the language model with a suite of specialized software tools already used in chemistry, including WebSearch for internet-based information retrieval, LitSearch for scientific literature extraction, and various molecular and reaction tools for chemical analysis.

ChemCrow’s Capabilities

By integrating ChemCrow with these tools, the researchers enabled it to autonomously plan and execute chemical syntheses, such as creating an insect repellent and various organocatalysts, and even assist in discovering new chromophores, substances fundamental to dye and pigment industries.

What sets ChemCrow apart is its ability to adapt and apply a structured reasoning process to chemical tasks. “The system is analogous to a human expert with access to a calculator and databases that not only improve the expert’s efficiency, but also make them more factual – in the case of ChemCrow, reducing hallucinations,” explains Andres Camilo Marulanda Bran, the study’s first author.

Practical Applications

ChemCrow receives a prompt from the user, plans ahead how to solve the task, selects the relevant tools, and iteratively refines its strategy based on the outcome(s) of each step. This methodical approach ensures that ChemCrow doesn’t work from theory alone but is grounded in practical application, interacting with real-world laboratory environments.
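In outline, that plan/act/observe loop can be sketched as follows. This is a hypothetical stand-in, not ChemCrow's actual code: `fake_llm_decide` plays the role of the LLM choosing the next tool, and the tool names are invented for illustration:

```python
def fake_llm_decide(task, history):
    """Stand-in for the LLM picking the next tool from the task and prior results."""
    if not history:
        return ("lit_search", task)          # first: gather literature
    if history[-1][0] == "lit_search":
        return ("plan_synthesis", task)      # then: propose a synthesis route
    return ("done", None)                    # finally: stop

# Illustrative tools; a real system would call search engines, chemistry
# software, or robotic lab equipment here.
TOOLS = {
    "lit_search": lambda q: f"papers about {q}",
    "plan_synthesis": lambda q: f"synthesis route for {q}",
}

def agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = fake_llm_decide(task, history)
        if action == "done":
            break
        observation = TOOLS[action](arg)       # execute the chosen tool
        history.append((action, observation))  # feed the result back into the loop
    return history

for step in agent("insect repellent"):
    print(step)
```

The key design point is the feedback edge: each tool's output re-enters the decision step, so the system can revise its plan instead of committing to one upfront.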

By democratizing access to complex chemical knowledge and processes, ChemCrow lowers the barrier to entry for non-experts while augmenting the toolkit available to veteran chemists. This can accelerate research and development in pharmaceuticals, materials science, and beyond, making the process more efficient and safer.

Source: SciTechDaily, Author: EPFL

Less burnout for doctors, better clinical trials, among the benefits of AI in health care
https://chrife.com.gh/less-burnout-for-doctors-better-clinical-trials-among-the-benefits-of-ai-in-health-care/ (Tue, 23 Apr 2024 18:53:09 +0000)

KEY POINTS

  • Artificial intelligence has traditionally been used to make health care safer and better. Now generative AI is making efficiency a priority.
  • A recent study found that using AI to generate draft replies to patient inbox messages reduced burden and burnout scores for medical professionals, even though they spent the same amount of time on the task.
  • AI-enabled solutions are on the horizon for efficiently matching potential participants to clinical trials, expediting drug development, and completing the time-consuming aspects of translating documents for non-English speaking patients and trial participants.

Over the last few decades, traditional artificial intelligence has largely been in service of making health care safer and better (the Institute of Medicine’s 2000 report “To Err Is Human” reported that nearly 100,000 people died annually from medical errors in hospitals). However, it’s only its successor — generative AI — that has made efficiency a priority.

Nvidia, known primarily as a hardware and chip company, has been working to optimize the health care space for 15 years. Kimberly Powell, Nvidia’s vice president of health care, and her team build domain-specific applications for health care, including in the realms of imaging, computing, genomics and drug discovery, under the umbrella of the “Clara” suite.

“It’s really just taking these mini applications, wiring them up so that they can perform and deliver a valuable service to an end market,” said Powell.

Health care is one of the largest data industries, Powell says. Naturally, it’s also a massively regulated industry and must be brought to market with care.

“Some come at it from the idea that we’re late to the game. I’m not sure that’s true,” said Dr. Josh Fessel, director of the office of translational medicine at the National Institutes of Health. “You’re dealing with human beings and you have to be incredibly careful with issues of privacy, security, transparency.”

Translational medicine, Fessel’s bread and butter, is how you get from a good idea to a thing that is actually poised to help people. In that, AI is the quest of the moment.

AI is already being deployed to streamline contact centers, modernize code to make institutions cloud native and create documents to help reduce medical burnout (Adam Kay’s memoir “This Is Going To Hurt,” in which he describes what led up to his own career-ending burnout, is not an anomaly). However, the thing about document creation, says Dr. Kaveh Safavi, senior managing director for Accenture’s global health care business, is that medical professionals must learn to verbalize their findings in the exam room. “That’s all part of the reality,” he said. “The technology requires the human to change in order to gain the benefit.”

A March study found that using AI to generate draft replies to patient inbox messages reduced burden and burnout scores in medical professionals, but didn’t reduce the amount of time they spent on this task. But time is not the only factor that’s important, Fessel says.

AI TO ADDRESS NURSING SHORTAGE

Meanwhile, AI-enabled solutions are on the horizon for efficiently matching potential participants to clinical trials, expediting drug development, and completing the time-consuming aspects of translating documents for non-English speaking patients and trial participants. Safavi says that globally, the nursing shortage is the biggest problem in health care (an Accenture report calls it a “global health emergency”), and he anticipates new technologies will begin to deploy within the next year to address this pressing concern.

Amidst all this, there are still kinks to work out. For example, the Clinical & Translational Science Award (CTSA) Program for the Mount Sinai Health System found in October that predictive models that use health record data to determine patient outlooks end up influencing the real-world treatments that providers give those patients, ultimately reducing the accuracy of the technology’s own predictions. In other words, if the algorithm does what it’s supposed to do, it will change the data — but then it operates on data that’s different from what it learned, ultimately reducing its performance. “It changes its own world, basically,” said Fessel. “It raises the question: What does continuing medical education for an algorithm look like? We don’t know yet.”
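The feedback loop Fessel describes can be simulated in a few lines: a toy model flags high-risk patients, providers act on the flag, and the very success of the intervention makes the model's predictions look worse. The numbers and the 0.3 treatment effect are invented for illustration:

```python
import random

random.seed(0)

def simulate(intervene):
    """Patients have a true risk score; the model flags risk > 0.5.
    If providers intervene on flagged patients, their outcomes improve,
    which makes the model's own predictions look 'wrong'."""
    correct = 0
    n = 10_000
    for _ in range(n):
        risk = random.random()
        flagged = risk > 0.5
        p_bad = risk
        if intervene and flagged:
            p_bad = risk * 0.3          # treatment cuts the chance of a bad outcome
        bad_outcome = random.random() < p_bad
        correct += (flagged == bad_outcome)
    return correct / n

acc_without = simulate(intervene=False)
acc_with = simulate(intervene=True)
print(f"apparent accuracy, no intervention:   {acc_without:.2f}")
print(f"apparent accuracy, with intervention: {acc_with:.2f}")  # lower: the model changed its own data
```

The second number is markedly lower even though the model is identical: by steering treatment, it shifted the data away from what it learned on, which is exactly the "it changes its own world" problem.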

To combat knowledge gaps like this, Fessel argues for a team approach across institutions. “Sharing what we’re learning is absolutely vital,” he said. Having a chief AI officer in place at a health institution can be helpful as long as they are empowered to bring in other brains and resources, he says.

Nvidia practices this by partnering with a range of organizations to deploy “microservices,” or software that integrates into an institution’s existing applications. In addition to helping navigate evolving regulatory terrain (like looming requirements for software as a medical device, or SaMD, per the U.S. Food & Drug Administration), it makes transformation more within reach. For example, Nvidia partnered with a company called Abridge on one of its first applications, which integrates into the electronic health record system Epic to streamline medical summaries.

Meanwhile, Nvidia is collaborating with Medtronic, which uses computer vision to identify 50% more potentially cancer-causing polyps in colonoscopies. And in tandem with the Novo Nordisk Foundation, they’re developing a national center for AI innovation in Denmark that will house one of the most powerful AI supercomputers in the world.

Right now, what provider organizations are largely prioritizing is getting ready for generative AI, says Safavi. This includes getting their technology house in order to prepare for cloud-native tools that need to be able to access the data.

A HUMAN IS THE ‘LAST MILE’

This also involves developing a responsible AI posture that protects privacy and intellectual property but dissuades the use of technology for diagnosis, Safavi said. “We want the human to be the last mile of the judgment,” he said.

Safavi said his biggest fear of AI in the health care space is that organizations won’t employ policies against technological diagnosis, and something bad will happen as a result. “There’s a reason to be proactive around putting boundaries,” he said. “In the absence of that, a bad outcome is likely to result in an overly generalized regulatory schema, which none of us benefits from.”

In March, the European Union adopted the Artificial Intelligence Act, which addresses AI safeguards, biometric identification systems, social scoring and rights to complaint and transparency. Safavi has worked in about 25 countries over the last 15 years and says any regulatory system the U.S. adopts will likely reflect that of the E.U., but we’re not there yet.

Even with all these evolutions, there is still so much unknown about how various health conditions develop and the role the environment plays. “To pretend that there are no black boxes in medicine is not true,” said Fessel. Redefining how health care operates gives the field an opportunity to re-examine many fundamental ideas about how we deliver care and learn new things, he adds. “That, to me, is one of the things that makes it so potentially transformative.”


Source: cnbc.com, Author: Rachel Curry

University of Ghana revises plagiarism policy to include AI
https://chrife.com.gh/university-of-ghana-revises-plagiarism-policy-to-include-ai/ (Tue, 27 Feb 2024 21:03:41 +0000)

The University of Ghana has amended its academic integrity framework, particularly in its approach to plagiarism.

A notice signed by the registrar, Emelia Agyei-Mensah, and dated February 26, 2024, revealed that the University Council and the Academic Board have greenlit a series of crucial updates, with a primary focus on combating plagiarism and other forms of academic misconduct.

The revised policy, approved following recommendations by the Business and Executive Committee, reflects the institution’s unwavering commitment to upholding the highest standards of integrity and ethics in academic endeavours.

Renamed the “Policy on Plagiarism and Other Academic Misconduct,” the document delineates clear definitions of academic misconduct while outlining measures aimed at prevention and appropriate sanctions.

A notable addition to the revamped policy is the incorporation of Artificial Intelligence (AI) in academic work and research.

Acknowledging the widespread availability of AI and related software/tools for academic assistance, the University underscores the imperative of originality in scholarly pursuits.

As such, any employment of AI or associated technologies that compromises the authenticity of academic output will be deemed unacceptable, in line with the overarching ethos of academic integrity.

The new document, however, is yet to be made public.

This strategic integration of AI into the plagiarism policy signals the University’s proactive stance in addressing contemporary challenges in academic ethics.

By leveraging technological advancements while emphasizing the paramount importance of original thought and independent scholarship, the University of Ghana aims to foster a culture of academic excellence and integrity among its faculty, staff, and students.

The move reflects a forward-looking approach to navigating the evolving landscape of academic research and reinforces the institution’s commitment to the integrity of scholarly pursuits.

With the revised policy now in effect, stakeholders within the University community are poised to embrace these changes as part of a concerted effort to safeguard academic integrity and promote a culture of intellectual honesty.

Source: My Joy Online


AI is smart — but can it think?
https://chrife.com.gh/ai-is-smart-but-can-it-think/ (Fri, 19 May 2023 18:42:44 +0000)

AI is revolutionizing the future of work.

The Turing Test Scenario

“The limits to computing are not the limits of physical devices. They’re not the limits of concrete or steel or anything like that in the physical world; the limits to computing are the limits of your imagination. When the first computers began to appear in the late 1940s and early 1950s, people were fascinated by these incredibly complex machines that could do things like process huge numbers of mathematical equations incredibly quickly. And so there was a buzz at the time around these electronic brains, with lots of people wondering: could machines really be intelligent?

So Alan Turing, I think, was one of the most remarkable people of the 20th century. What he did was invent a beautiful test. He said, “Look, here is this test. If we ever get something that passes it, then just stop asking that question, because you can’t tell the difference.” He never really expected that anybody was seriously going to try it out, but actually people did try it out. And it’s been wildly misinterpreted since then, I’d say. What Turing’s most famous for is working at Bletchley Park, a code-breaking centre in the United Kingdom throughout the Second World War. And actually, if that was the only thing he’d done in his life, he would have a place in the history books. But almost as a side product of his PhD work, he invented computers, which is just one of the most remarkable quirks of history. And with incredible precociousness, he picked one of the biggest mathematical problems of the age, the Entscheidungsproblem, which means “decision problem”: whether mathematics can be reduced to following a recipe.

So the question that Turing asked was: is it the case that for any mathematical problem you might come up with, you can find a recipe which you can just follow, in the same way that you would follow one for arithmetic? Incredibly quickly, within about 18 months, Turing solved it. And the answer is no, mathematics doesn’t reduce to following a recipe. But the interesting thing is that, to solve that problem, Turing had to invent a machine which could follow instructions; nowadays we call them Turing machines, but it’s basically a modern computer. And he was one of the first serious thinkers about AI. In 1950, he published what we think is the first real scientific work on artificial intelligence.

If we ever achieved the ultimate dream of AI, which I call the Hollywood dream of AI, the kind of thing that we see in Hollywood movies, then we will have created machines that are conscious potentially in the same way that human beings are. So Alan Turing, I think, was really frustrated by people saying, “Well, no, of course these machines can’t be intelligent or creative or think or reason,” and so on. So Turing’s genius was he invented a beautiful test. We call it the Turing Test in his honor. Okay, so here’s how the test goes. You’ve got somebody like me, who’s sitting at a computer terminal with a keyboard and a screen, and I’m allowed to ask questions. I type out questions on the keyboard, but I don’t know what’s on the other end, right? I don’t know whether it’s a computer program or another human being. So Turing’s genius was this. He said, “Well, look, imagine after a reasonable amount of time, you just can’t tell whether it’s a person or a machine on the other end. If a machine can fool you into not being able to tell that it’s a machine, then stop arguing about whether it’s really intelligent because it’s doing something indistinguishable. You can’t tell the difference. So you may as well accept that it’s doing something which is intelligent.”

And I think Turing never really expected that people would seriously try it out. And so there are annual Turing Test competitions across the world where people will enter computer programs, and there will be judges who will try and tell whether they’re a computer program or a human being. Most of the entries are these kinds of crude internet chatbots. And what these chatbots do is they just look for keywords like sad or family or lonely or drunk or something like that. And they plug that keyword into a canned response. And so they’re really just trying to fool the investigators. And I have to say, there’s not really much AI in most of those programs because they’re doing something which is really much more superficial. It’s a boom period for AI now because a very narrow class of techniques has turned out to be very successful on a wide range of problems.

Modern AI is really focused on doing very specific tasks, and in those specific tasks, like playing a game of chess, it might be better than any living human being, but it can’t do anything else. Those programs are just tuned, very finely tuned, to do one tiny thing very well. But going back to Alan Turing’s invention: he invented machines for following lists of instructions, but the computer can adapt those instructions. It can learn to adapt those instructions over time. And that’s basically what you’re doing in machine learning. He’s one of the founders of the field, not just of computing, but also of artificial intelligence. One of the most remarkable people of the 20th century.”
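The keyword-and-canned-response trick Wooldridge describes can be sketched in a few lines; the keywords and replies here are invented for illustration, in the spirit of the early ELIZA-style chatbots:

```python
# Scan the input for a trigger word and plug in a canned response.
# No understanding is involved -- which is exactly Wooldridge's point.

CANNED = {
    "sad": "I'm sorry you feel sad. Tell me more.",
    "family": "How do you get along with your family?",
    "lonely": "Feeling lonely is hard. What do you think causes it?",
}

def chatbot_reply(message):
    for keyword, response in CANNED.items():
        if keyword in message.lower():
            return response
    return "Tell me more."                  # fallback when no keyword matches

print(chatbot_reply("I've been feeling lonely lately"))
# -> Feeling lonely is hard. What do you think causes it?
```

A program this shallow can still fool an unwary judge for a few exchanges, which is why passing a casual conversation test is a much weaker claim than the genuine intelligence Turing had in mind.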

Source: bigthink.com, Author: Michael Wooldridge
