Tag: AI

  • How Apple’s M4, M4 Pro, And M4 Max Chipsets Transform Performance For Entrepreneurs And Developers In The AI Era

    Apple’s New Chipsets: Apple’s M4, M4 Pro, and M4 Max chipsets set new standards for professional performance, combining power efficiency with advanced AI capabilities. Built on cutting-edge 3-nanometer technology, these chips provide unmatched speed and multitasking, catering to the demands of entrepreneurs, data scientists, 3D artists, composers, and developers, making complex workflows faster and more efficient than ever.

    According to Apple, the CPUs across the M4 family feature the world’s fastest CPU core, delivering the industry’s best single-threaded performance, and dramatically faster multithreaded performance.

    The GPUs build on the breakthrough graphics architecture introduced in the previous generation, with faster cores and a 2x faster ray-tracing engine. M4 Pro and M4 Max enable Thunderbolt 5 for the Mac for the first time, and unified memory bandwidth is greatly increased — up to 75 percent.

    Adding further, the M4, M4 Pro, and M4 Max chipsets are designed with Apple Intelligence at their core, marking a new era for Mac. Apple Intelligence integrates powerful generative models with pioneering privacy features, leveraging Apple silicon and the Neural Engine to enable users to work, communicate, and create in entirely new ways.

    “Combined with a Neural Engine that’s up to 2x faster than the previous generation and enhanced machine learning (ML) accelerators in the CPUs, the M4 family of chips brings incredible performance for pro and AI workloads,” said the company.

    The M4 features an up to 10-core CPU, with four performance cores and up to six efficiency cores. It’s up to 1.8x faster than M1, so multitasking across apps like Safari and Excel is lightning fast. A 10-core GPU provides incredible graphics performance, up to 2x faster than M1, making everything from editing photos to AAA gameplay exceptionally fast and smooth.

    M4 supports up to 32GB of unified memory and has a higher memory bandwidth of 120GB/s. M4 Pro takes the advanced technologies debuted in M4 and scales them up for researchers, developers, engineers, creative pros, and other users with more demanding workflows.

    It features an up to 14-core CPU consisting of up to 10 performance cores and four efficiency cores. It’s up to 1.9x faster than the CPU of M1 Pro, and up to 2.1x faster than the latest AI PC chip.

    M4 Pro supports up to 64GB of fast unified memory and 273GB/s of memory bandwidth, which is a massive 75 per cent increase over M3 Pro and 2x the bandwidth of any AI PC chip.

    M4 Max is the ultimate choice for data scientists, 3D artists, and composers who push pro workflows to the limit. It has an up to 16-core CPU, with up to 12 performance cores and four efficiency cores. It’s up to 2.2x faster than the CPU in M1 Max and up to 2.5x faster than the latest AI PC chip.

    M4 Max supports up to 128GB of fast unified memory and up to 546GB/s of memory bandwidth, which is 4x the bandwidth of the latest AI PC chip. This allows developers to easily interact with large language models that have nearly 200 billion parameters.
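    As a rough sanity check on that claim, here is a back-of-the-envelope estimate. The assumptions are mine, not Apple’s: 4-bit quantized weights plus roughly 20 per cent overhead for the KV cache and activations; actual requirements vary by runtime and context length.

    ```python
    # Back-of-the-envelope memory estimate for running a quantized LLM
    # entirely in unified memory. Assumes 4-bit weights by default plus
    # a ~20% allowance for KV cache and activations (both assumptions,
    # not vendor figures).
    def model_memory_gb(params_billion: float, bits_per_weight: int = 4,
                        overhead: float = 0.20) -> float:
        weight_bytes = params_billion * 1e9 * bits_per_weight / 8
        return weight_bytes * (1 + overhead) / 1e9

    print(round(model_memory_gb(200), 1))  # 120.0 -> fits within 128GB
    ```

    Under these assumptions a ~200-billion-parameter model needs on the order of 120GB, which is consistent with it fitting in 128GB of unified memory; at 8-bit precision the same model would not fit.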

    M4, M4 Pro, and M4 Max are built for Apple Intelligence, according to the company. (With IANS Inputs)

  • MediaTek Unveils Dimensity 9400 Chip For Latest AI Experiences

    New Delhi: Chip-maker MediaTek on Wednesday launched Dimensity 9400, the new flagship smartphone chipset optimized for edge-AI applications, immersive gaming, incredible photography and more. The first smartphones powered by the Dimensity 9400 chip will be available in the market, starting in Q4, said the company.

    The Dimensity 9400, the fourth and latest in MediaTek’s flagship mobile SoC lineup, offers a boost in performance with its second-generation ‘All Big Core’ design built on Arm’s v9.2 CPU architecture, combined with the most advanced GPU and NPU for extreme performance in a super power-efficient design.

    Joe Chen, President at MediaTek, said the new chip will continue furthering “our mission to be the enablers of AI, supporting powerful applications that anticipate users’ needs and adapt to their preferences, while also fueling generative AI technology with on-device LoRA training and video generation”.

    The Dimensity 9400 offers 35 per cent faster single-core performance and 28 per cent faster multi-core performance compared to MediaTek’s previous generation flagship chipset, the Dimensity 9300.

    According to the company, built on TSMC’s second-generation 3nm process, the Dimensity 9400 is up to 40 per cent more power-efficient than its predecessor, allowing users to enjoy longer battery life.

    “As the fourth-generation flagship chipset, the Dimensity 9400 continues to build on our momentum of steady growth in market share, and MediaTek’s legacy of delivering flagship performance in the most efficient design for the best user experiences,” Chen added.

    To allow users to take advantage of the latest generative AI applications, the Dimensity 9400 offers up to 80 per cent faster large language model (LLM) prompt performance while also being up to 35 per cent more power efficient than the Dimensity 9300.

    The company said it is working with developers to offer a unified interface between AI agents, third-party APKs, and models that efficiently run both edge AI and cloud services.

  • 1.77 Crore Mobile Connections Disconnected, 45 Lakh Spoofed Calls Blocked: Center

    New Delhi: The Center on Friday informed that 1.77 crore mobile connections obtained using fake or forged documents have been disconnected so far, identified with the help of artificial intelligence (AI)-based tools. Moreover, four telecom service providers (TSPs) have successfully implemented an advanced system in collaboration with the Department of Telecommunications (DoT), blocking 45 lakh spoofed international calls so far from entering the Indian telecom network.

    “The next phase, involving a centralized system that will eliminate the remaining spoofed calls across all TSPs, is expected to be commissioned shortly,” said the Ministry of Communications. DoT has introduced an advanced system designed to identify and block incoming international spoofed calls before they can reach Indian telecom subscribers.

    This system is being deployed in two phases — first at the TSP level to prevent calls spoofed with phone numbers of their own subscribers, and second, at a central level, to stop calls spoofed with the numbers of subscribers from other TSPs. As part of the action on 1.77 crore mobile connections, the Center disconnected 33.48 lakh mobile connections and blocked 49,930 mobile handsets used by cyber criminals in cyber-crime hotspots/districts of the country.
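    The phase-one check described above can be sketched roughly as follows. This is a hedged illustration with hypothetical subscriber sets; real DoT/TSP systems work on live signalling data, which is also why the roaming exception matters here.

    ```python
    # Minimal sketch of the phase-one spoofed-call check: an incoming
    # *international* call that presents the number of one of the TSP's
    # own domestic subscribers is spoofed, unless that subscriber is
    # genuinely roaming abroad. (Hypothetical data structures; real
    # systems use signalling data.)
    def is_spoofed_international_call(caller_id: str,
                                      own_subscribers: set,
                                      roaming_abroad: set) -> bool:
        return caller_id in own_subscribers and caller_id not in roaming_abroad

    # A domestic subscriber's number arriving on an international trunk:
    print(is_spoofed_international_call("+919800000001",
                                        {"+919800000001"}, set()))   # True
    # The same number while that subscriber is roaming abroad:
    print(is_spoofed_international_call("+919800000001",
                                        {"+919800000001"},
                                        {"+919800000001"}))          # False
    ```

    Phase two generalizes the same idea centrally, so a call spoofing any TSP’s subscriber number can be caught regardless of which network it terminates on.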

    About 77.61 lakh mobile connections exceeding the prescribed limits for an individual have been disconnected, and 2.29 lakh mobile phones involved in cyber-crime or fraudulent activities have been blocked. About 12.02 lakh out of 21.03 lakh reported stolen/lost mobile phones have been traced, and DoT and TSPs disconnected about 20,000 entities, 32,000 SMS headers and 2 lakh SMS templates involved in sending malicious SMSs.

    “About 11 lakh accounts have been frozen by the banks and payment wallets which were linked to disconnected mobile connections taken on fake/forged documents,” informed the ministry. Nearly 11 lakh WhatsApp profiles/accounts linked to such disconnected mobile connections have been disengaged by WhatsApp.

    The DoT informed that 71,000 Point of Sale (SIM) agents have been blacklisted so far and 365 FIRs have been registered in multiple states and UTs.

  • OpenAI CTO Mira Murati Resigns As Top Executive Departures Continue In The AI Firm

    New Delhi: OpenAI’s chief technology officer (CTO) Mira Murati has announced that she is leaving the artificial intelligence company. Murati had briefly served as its interim CEO during a period of turmoil last year.

    In a post that she shared on X, Mira Murati wrote that after much reflection she has “made the difficult decision to leave OpenAI”.

    “I’m stepping away because I want to create the time and space to do my own exploration,” she said.

    I shared the following note with the OpenAI team today.
    — Mira Murati (@miramurati) September 25, 2024

    Hours after Mira’s post, Altman shared another note stating that two other top level executives have decided to depart OpenAI.

    “I also want to share that Bob and Barret have decided to depart OpenAI. Mira, Bob, and Barret made these decisions independently of each other and amicably, but the timing of Mira’s decision was such that it made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership,” wrote Altman.

    He also praised Mira for being instrumental to OpenAI’s progress and growth in “the last 6.5 years; she has been a hugely significant factor in our development from an unknown research lab to an important company.”

    Murati was made the company’s interim CEO late last year after the board of directors fired CEO Sam Altman, a move that sent shock waves through the AI industry. The company later brought in another interim CEO before restoring Altman to his leadership role, and also replaced a majority of the board members who had ousted him.

  • Is AI The Real Threat To Jobs, Privacy? Expert Sheds Light On Critical Aspects

    New Delhi: AI is revolutionizing industries around the globe—from healthcare to the tech and creative industries—by automating tedious tasks and opening doors to new opportunities. While concerns about job displacement exist, AI offers avenues for growth through upskilling and the creation of roles that didn’t exist before.

    Ethical AI governance and public-private partnerships with appropriate cybersecurity infrastructure can ensure that this technology realizes humans’ best interests. As AI evolves, it transforms the global vista while finding a balance between progress, safety, and opportunity.

    In a recent email interview, Anand Birje, the CEO of Encora and former Digital Business Head of HCL Technologies, shared his insights on the existential risks posed by advanced technologies.

    How Is Generative AI Impacting Job Creation?

    AI is reshaping the job landscape, but it is not a simple story of replacement. We can see major shifts in healthcare, tech, creative fields and every vertical with AI increasing the scope of existing roles by reducing repetitive and mundane tasks. However, while a percentage of roles that involve routine tasks may get phased out, AI will also create entirely new roles, responsibilities and positions that currently do not exist.

    For enterprises as well as individuals, the key to navigating these times of change is adaptation. According to him, “We need to focus on training people and create a culture where upskilling and reskilling are constant. This cultural shift requires a change in individual mindset and must form an essential part of change management strategies for enterprises”.

    Forward-looking enterprises are already helping their people realize and appreciate the true scale of change being brought by AI–and the challenges, but also the opportunities this presents for them to progress in their careers.

    AI is not the existential threat to jobs that many fear; however, it will force us to reinvent the nature of work and evolve as individuals in the process to harness its full potential. You can draw a parallel with the wheel.

    Humans could and did travel and transport goods before its invention, but the wheel allowed us to save energy and time to focus on other areas and opened new avenues of progress for our civilization.

    Does End-to-End Encryption Fail To Prevent Data Leaks On Social Media Platforms?

    Trust in social media platforms is a big issue right now, affecting millions of users globally, including all of us. Encryption helps, but it is not enough; it’s just one piece of a complex puzzle. What we need is a multilayered approach that involves transparency, compliance, and accountability. Recent times have seen a shift in this direction, with companies disclosing where user data is stored geographically as well as how they plan to leverage it.

    As for regulations, we need to find the right balance. According to him, “We need frameworks that protect users while still allowing for technological progress. These frameworks must address the unique complexities of different geographies, comply with local regulations and global standards, and safeguard user privacy while leaving room for innovation and creativity”.

    The tech industry must step up and adopt a ‘privacy by design’ approach. This means building guardrails into products and services from the ground up, not as an afterthought.

    This is truer than ever in a world where AI is being leveraged for identity theft, misinformation, and manipulation. Ultimately, building trust will require deeper collaboration between tech companies, regulators, and users themselves, and this is a key factor to consider as we redesign digital channels to adapt to an AI world.

    The Existential Risk of AI: Should We Be Concerned?

    We should take these warnings seriously. But it is also crucial to differentiate between immediate, concrete risks and long-term, speculative concerns. The real threats we face today are not sci-fi scenarios of AI dominance. They are more subtle – things like AI bias, privacy breaches, echo chambers, and the spread of misinformation. These are real problems affecting real people right now.

    To address these, we need collaboration. It is not something any one company or even one country can solve alone. According to him, “We need governments, tech firms, and academics working together to ensure that standards for ethics, transparency and compliance are set for areas that involve AI usage. Public education in the benefits of AI, as well as the pitfalls associated with it, is also important to ensure safe use”.

    But here is the thing–while we work on these risks, we cannot forget the good AI can do. It is a powerful tool that could help solve big global problems. We need to be careful with AI, but also hopeful about what it can achieve. This is a big challenge for our generation, and we need to step up to it.

    Where Government Falls Short In Addressing Digital Fraud?

    Online financial fraud is a growing concern. While the government has made efforts, we are still playing catch-up. The main challenge is speed – cybercriminals move fast, and our legal and regulatory frameworks often struggle to keep up. With the advent of modern technologies such as Gen AI, cybercrime continues to grow in sophistication, scale, and speed.

    Regulatory bodies and government agencies must work together with technology companies and bring the best technological talent to bear against cybercrimes. According to him, “We need to think outside the box, for instance, build a real-time threat sharing platform between technology companies and government agencies that can help identify and stop financial cybercrime in its tracks”.

    We also need a more proactive strategy and an update to the legal framework. Conventional laws are ill-equipped to deal with modern cybercrime and this can lead to apathy or lack of speed when addressing it.

    Digital literacy is crucial too; many frauds succeed simply because people are not aware of the risks. This holds especially true for a country like India, where widespread internet penetration into rural areas, and thus to the majority of the population, is a recent phenomenon.

    To sum up, the risk of AI being used for financial cybercrime is very real. To combat it effectively, we need better technology, smarter regulation, improved education, and closer collaboration across sectors.

    Is It Time For Governments To Regulate AI?

    In my view, some level of government oversight for AI is not just advisable, but necessary. Ideally created through public-private partnerships, this oversight is needed to ensure safety and ethical usage of AI even as the technology quickly becomes ubiquitous in our drive to infuse creativity and innovation across work streams.

    We need a framework that is flexible and adaptable and focuses on transparency, accountability, and fairness. The regulatory approach would depend heavily on local government bodies; however, it can be tiered so that the level of oversight and regulatory requirements are directly proportional to capabilities and potential impact.

    For instance, an AI being used to help marketers make their copy more engaging does not require the same level of oversight as an AI that helps process insurance claims for the healthcare industry.

    According to him, “We also need to think about AI’s broader societal impact and take active steps to address issues like job displacement and data privacy. By keeping them firmly in our sights, we can ensure that the policies being developed to regulate AI are in the best interest of the public and align with our values and human rights”.

    Effective AI regulation will require ongoing dialogue between policymakers, industry leaders, and the public. It is about striking the right balance between innovation and responsible development, harnessing the technology’s full potential while protecting our civilization from its side-effects.

    Are AI and Robotics A Danger To Humanity?

    Look, ‘Terminator’ makes for great entertainment, but we are far from that reality. For the first time, AI can make decisions and has evolved from ‘tools’ to ‘agents’. The real and immediate risks are not around AI taking over the world but around how humans might misuse the massive potential it brings to the table. At present, we should be more concerned about the use of AI for privacy invasions, autonomous weapons, misinformation, and disinformation.

    According to him, “We are at a crucial point in shaping its development, a few moments before the technology becomes ubiquitous. We need to prioritize safety and global governance frameworks, create clear ethical guidelines and failsafe mechanisms, invest in AI literacy, and keep humans in control of critical decisions”.

    Prevention is about being proactive. The goal should be to use AI wisely. We should not fear it, but we do need to guide it in the right direction. It is all about finding that sweet spot between progress and responsibility.

    How Vulnerable Are AI Military Systems To Cyberattacks?

    This is an important question. As AI gets integrated more closely with our existing infrastructure, there are a few areas where it has the potential to cause the most chaos. According to him, AI in military systems is one of these areas that requires us to tread with extreme caution.

    From data poisoning to manipulate decisions and adversarial attacks to theft of sensitive data and unauthorized access, there are many ways AI integration can lead to vulnerabilities and challenges for the military and cause significant damage in the process.

    For instance, evasion attacks can change the color of a few pixels in a way that is imperceptible to the human eye, yet cause the AI to misclassify the image, and to do so with high confidence. This can be used to attack AI systems involved in facial detection or target recognition, with disastrous consequences.
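    As a toy illustration of why such evasion attacks work (a hedged sketch against a made-up linear classifier, not any real target-recognition system): for a model that scores an image as the dot product w · x, nudging every pixel by a tiny epsilon in the direction of sign(w) shifts the score by epsilon times the sum of |w|, which can be enough to flip the predicted label even though no single pixel changes perceptibly.

    ```python
    import random

    # Toy FGSM-style evasion against a hypothetical linear classifier
    # (label = sign of w . x). Each pixel moves by at most 0.01 --
    # visually imperceptible -- yet the predicted label flips, because
    # every tiny change is aligned with the weight's sign and the
    # effects add up across all 64 pixels.
    random.seed(0)
    w = [random.gauss(0, 1) for _ in range(64)]           # toy 8x8 "model"
    sign = lambda v: (v > 0) - (v < 0)
    score = lambda img: sum(wi * p for wi, p in zip(w, img))

    x = [-0.001 * sign(wi) for wi in w]                   # clean: score < 0
    x_adv = [p + 0.01 * sign(wi) for p, wi in zip(x, w)]  # perturbed input

    print(sign(score(x)))       # -1 (original label)
    print(sign(score(x_adv)))   #  1 (label flipped)
    print(max(abs(a - b) for a, b in zip(x_adv, x)))      # ~0.01 per pixel
    ```

    Real attacks compute the perturbation from the model’s actual gradients rather than a known weight vector, but the principle — many imperceptible, coordinated changes accumulating into a decisive score shift — is the same.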

    So how do we tackle this? We need best-in-class cybersecurity and robust AI systems that can explain their decisions for human verification. This is an area where government agencies are advised to work closely with technology companies to implement AI systems that can identify and resist manipulation, bring in Zero Trust Architecture for sensitive digital infrastructure and involve humans in the decision-making process for important situations.

    AI should support military decision-making, not replace human judgment.

  • Honda Partners With IIT Delhi And Bombay For AI-Powered Driver Assistance And Automated Driving Research

    New Delhi: Japanese auto major Honda on Wednesday said it has started joint research on AI technologies with IIT Delhi and IIT Bombay with plans to develop driver assistance and automated driving technologies applicable in various regions of the world, including India.

    The joint research is aimed at further advancing Honda CI (Cooperative Intelligence), the original Honda AI that enables mutual understanding between machines and people, the company said in a statement.

    Honda Cars India Ltd (HCIL), Honda’s subsidiary in India, will sign a joint research contract with the two IITs. “The IITs are home to a large number of excellent researchers and engineers, and through the joint research with those institutes, Honda will strive to advance the underlying technologies of CI, with an eye toward future applications for technologies that reduce traffic collisions and enable automated driving,” it said.

With an aim to further advance CI, Honda and the IITs have set joint research themes such as recognition of the surrounding environment and cultivation of cooperative behavior, and will conduct research and development utilizing cutting-edge AI technologies, it added.

Under the partnership, Honda associates and IIT professors will work with IIT students on each research theme, planning, designing, developing and testing technologies that work beyond the confines of the laboratory, allowing research and development to proceed more flexibly and with a high degree of freedom, Honda said.

    “This will enable Honda and IITs to work in a more flexible environment with deeper exchange of academic and industry insights,” it added.

    In addition, as part of this research, Honda with the help of IITs is aiming to conduct verification of driving assistance and automated driving technologies in the suburbs of Delhi and in Mumbai.

Owing to numerous variations in its road systems and a large number of road users, India has a complex traffic environment in which frequently occurring situations are difficult for AI to predict.

    “By conducting technology verification in such a technically challenging environment, Honda and IITs will refine the underlying technologies of CI and strive to apply them to future driver assistance and automated driving technologies in various regions of the world, including India,” the statement said.

    Honda said it has been actively hiring IIT graduates since 2019, and many of them are now playing key roles in the areas of mobility intelligence, including the research and development of CI.

• Jio Phonecall AI: Users Can Record And Save Calls In Jio Cloud, Automatically Turn Them Into Text

    Jio has introduced Jio Phonecall AI, a new service that brings AI capabilities right to your phone calls. With this tool, users can record and save calls in Jio Cloud, automatically turn them into text and even get summaries and translations in different languages. Akash Ambani explains that Jio Phonecall AI is designed to make managing and understanding phone conversations easier and more efficient.

    How to Use Jio Phonecall AI

Using Jio Phonecall AI is simple. Here’s a step-by-step guide:

– Dial the AI Number: Call 1-800-1732673 to connect with Jio Phonecall AI.

    – Start Recording: After the welcome message, press 1 to begin recording.

    – Speak Freely: The AI will automatically record, transcribe, and translate your conversation in the background.

    – Recording Alerts: The AI will periodically announce, “The call is being recorded,” to ensure transparency.

    – Pause Transcription: Press 2 if you need to pause transcription. You’ll hear “Transcription is paused.”

    – Resume or End: Press 1 to resume transcription or press 3 to end the session.

    This feature supports various communication formats, including one-on-one calls, group conferences, and personal notes. Jio Phonecall AI is especially helpful for hearing-impaired individuals, promoting inclusivity and ensuring that everyone’s voice can be heard.

All call recordings and transcripts will be stored in Jio Cloud, so users can easily retrieve them whenever needed. Jio Phonecall AI makes recording and transcribing calls as straightforward as dialing a number, making it an essential tool for smooth and effective communication in our connected world.

    In addition, Jio has also introduced Jio TvOS, a fully home-grown operating system for the Jio Set Top Box. Designed for a superior big-screen experience, Jio TvOS delivers faster, smoother performance with features like Ultra HD 4K video, Dolby Vision, and Dolby Atmos. According to Kiran Thomas, President of Reliance Jio, it provides a custom-made entertainment system for home use.

• ‘AI Will Be Of More Value To Us Than We Imagined’: Anand Mahindra

New Delhi: Mahindra Group Chairman Anand Mahindra has said “artificial intelligence (AI) will be of more value to us than we imagined”, citing research showing that AI can detect breast cancer five years before it develops.

    “If this is accurate, then AI is going to be of significantly more value to us than we imagined and much earlier than we had imagined…,” said Anand Mahindra in a post on X.com.


Several studies show the potential of AI in the early detection of cancers. Advanced technology is also paving the way for the development of new drugs and for predicting treatment outcomes and prognosis.

    Recently, a team of researchers from Duke University in the US developed a new, interpretable artificial intelligence (AI) model to predict 5-year breast cancer risk from mammograms. Another study, published in the journal Radiology, showed AI algorithms outperformed the standard clinical risk model for predicting the five-year risk for breast cancer.

    Biopsy, histological examinations under microscopes, and imaging tests such as MRI, CT, and PET scans are traditional approaches to diagnosing cancer. While the interpretation of these tests is likely to vary among professionals, AI systems, especially those using deep learning techniques, can analyze medical images with staggering accuracy.

AI can also detect minute anomalies often missed by the human eye, reducing false negatives. Such early detection can boost treatment outcomes and accelerate the growth of personalized medicine.

Vineet Nakra, a radiation oncologist at Max Super Specialty Hospital, told IANS that AI is helping pathologists diagnose cancer much faster and paving the way for doctors to deliver personalized, patient-centric cancer care.

• Global PC Market Reaches 62.5 Million Units Ahead Of Upcoming AI Boom

New Delhi: Global PC shipments grew 3.1 per cent in the second quarter this year to reach 62.5 million units, the second consecutive quarter of year-on-year growth, a report said on Tuesday.

    AI PCs remain the most popular topic in the PC industry. Laptops with Qualcomm’s Snapdragon X platform have begun to trigger the first wave of the AI laptop boom, according to Counterpoint Research.

    The upcoming AMD Ryzen AI 300 and Intel Lunar Lake in the second half of Q3 2024 will further drive the popularity of AI PCs.

    HP had a 21 per cent market share in Q2 2024, while Dell had 16 per cent. Apple remained quiet in the latest quarter with an 8.5 per cent market share. According to the report, AI laptops will expand the AI PC market in H2 2024, driven especially by surging Windows on Arm AI devices at the end of Q2 2024.

    “H2 2024 will be the battlefield for AI PCs. Vendors are paving the way for the proliferation of AI PCs and the rise of a brand-new technology generation,” said senior analyst William Li.
    “However, due to the higher selling price of AI PCs, we believe enterprise (commercial) demand will run ahead of the mainstream consumer sector, although we may not see the most rapid adoption path in the first year of AI PCs,” Li added.