
Technology

Driving the development of NLP


Natural Language Processing (NLP) is a rapidly growing field of artificial intelligence (AI) that is having a significant impact on industries including healthcare, technology, and lifestyle. Several key technologies and techniques are driving the development of NLP and are being applied in real-world applications.

One of the key technologies driving the development of NLP is deep learning. Deep learning algorithms, such as recurrent neural networks (RNNs) and transformer models, are being used to improve the accuracy and efficiency of NLP tasks such as language translation, sentiment analysis, and text summarization. For example, RNNs and transformer models can analyze large amounts of text data and identify patterns and trends that assist with diagnosis and treatment planning in healthcare. Additionally, transformer models such as BERT and GPT-3 are used to improve the performance of virtual assistants and customer service chatbots.

Another important technology driving the development of NLP is the use of large-scale pre-trained models. Pre-trained models, such as BERT, GPT-3, and T5, have been trained on large amounts of text data and can be fine-tuned for specific NLP tasks, such as question answering and text classification. These pre-trained models have significantly improved the performance of NLP tasks and have made it easier for developers to build and deploy NLP applications. Beyond deep learning and pre-trained models, other key techniques driving the development of NLP include (illustrated in the sketch after the list):

  • Named Entity Recognition (NER): used to identify and classify named entities, such as people, organizations, and locations, in text data. 
  • Part-of-Speech Tagging (POS): used to identify and classify the parts of speech, such as nouns, verbs, and adjectives, in text data. 
  • Sentiment Analysis: used to determine the emotional tone of text data, such as whether a piece of text is positive, negative, or neutral. 
  • Text Summarization: used to generate a condensed version of text data, such as a summary of a news article or a summary of customer feedback. 
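
To make these techniques concrete, the sketch below runs three of them through the Hugging Face Transformers pipeline API, which wraps pre-trained models behind a one-line interface. This is a minimal illustration, not a production setup: the models downloaded are the library's defaults, and exact outputs will vary by version.

```python
# Minimal sketch: NER, sentiment analysis, and summarization via
# Hugging Face pipelines. Each call downloads a default pre-trained
# model on first use.
from transformers import pipeline

text = ("Acme Corp opened a new clinic in Boston, and early patient "
        "feedback about the booking system has been very positive.")

ner = pipeline("ner", aggregation_strategy="simple")
print(ner(text))          # entities such as ORG "Acme Corp", LOC "Boston"

sentiment = pipeline("sentiment-analysis")
print(sentiment(text))    # e.g. [{'label': 'POSITIVE', 'score': ...}]

summarizer = pipeline("summarization")
print(summarizer(text, max_length=30, min_length=5, do_sample=False))
```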

As the field continues to evolve, NLP will likely play an increasingly important role in shaping the way we live, work, and interact with technology. Key technologies and techniques, such as deep learning, pre-trained models, NER, POS tagging, and sentiment analysis, are already being applied in real-world systems to improve efficiency, accuracy, and communication.


Inspirations

From 5,126 failures to a billion-dollar revolution, the inspiring story of James Dyson


Innovation often looks glamorous from a distance, but behind every world-changing invention lies a story of struggle, doubt, and relentless perseverance. The story of James Dyson, the inventor of the Dyson vacuum cleaner, is a powerful example of what it means to believe in your vision even when the world refuses to see it.

The Early Spark of an Inventor

James Dyson was born in 1947 in Cromer, England. From a young age, he displayed curiosity about how things worked. After studying at the Royal College of Art, he initially designed the Ballbarrow, a wheelbarrow with a ball instead of a wheel, an invention that hinted at the creative problem-solving approach that would later define his career.

Yet, Dyson’s real breakthrough came from an ordinary household frustration. In the late 1970s, he noticed his traditional vacuum cleaner losing suction. The bag clogged with dust, reducing performance. Most people would replace the bag and move on, but Dyson saw a design flaw waiting to be fixed.

The Birth of an Obsession

Inspired by industrial cyclones used to separate particles from air, Dyson wondered: what if a vacuum cleaner could work without a bag? That simple question set him on a five-year journey of tireless experimentation.

He built one prototype after another, testing, adjusting, and starting over. It wasn’t a few dozen or a few hundred attempts. Dyson built 5,126 prototypes before creating one that actually worked.

Each failure wasn’t just a setback; it was a lesson. He often said later, “Each failure taught me something new. That’s how I got closer to success.”

Rejection, Rejection, and More Rejection

Even after developing a working prototype, Dyson faced another mountain: convincing someone to believe in it. Manufacturers laughed at the idea of a bagless vacuum. The vacuum bag industry was a billion-dollar market, and no one wanted to destroy their own profits.

For years, Dyson knocked on doors, wrote letters, and pitched his design to companies across Europe, the United States, and Japan. He was rejected over and over again. Some told him his design was impractical, others that it would never sell.

But Dyson didn’t stop. He believed in what he built.

The Breakthrough in Japan

Finally, in 1983, a small Japanese company saw potential in Dyson’s invention. They launched the “G-Force” vacuum cleaner, a sleek, futuristic machine that became a hit in Japan. Dyson used the money from that success to start his own company in Britain: Dyson Ltd.

In 1993, after more than fifteen years of work and rejection, he released the DC01, the first Dyson vacuum cleaner. It was a bold design, transparent so users could see the dust spinning inside. It was not just functional; it was beautiful.

The DC01 became the best-selling vacuum cleaner in Britain within 18 months.

Redefining Innovation

Dyson’s success didn’t stop with vacuums. He built an empire around constant reinvention: hand dryers, air purifiers, fans, hair dryers, and even electric vehicles. His company became a symbol of British innovation and design thinking.

Today, Dyson Ltd. is a global technology powerhouse with products sold in over 80 countries. James Dyson himself is one of the UK’s richest and most respected inventors, but his true legacy lies not in his wealth, but in his mindset.

Lessons from Dyson’s Journey

  1. Persistence Outlasts Talent – Dyson wasn’t an overnight success. He spent 15 years refining a single idea. Most would have given up long before the 1,000th failure, let alone the 5,000th.
  2. Failure is a Teacher – Dyson viewed each failed prototype as a necessary step toward progress. Every “no” from investors was a filter that brought him closer to the right opportunity.
  3. Challenge the Status Quo – The world didn’t need another vacuum cleaner; it needed a better one. Dyson succeeded because he questioned assumptions everyone else accepted.
  4. Own Your Vision – When no one believed in his invention, Dyson built his own path. His story reminds us that if others can’t see your vision yet, it doesn’t mean it’s not worth pursuing.

The Legacy of Relentless Curiosity

James Dyson’s story is not just about engineering; it’s about mindset. He turned failure into fuel, rejection into motivation, and persistence into innovation.

His life is proof that sometimes success hides behind thousands of failures, and the only way to reach it is to keep going, even when logic, people, and circumstances tell you to stop.

As Dyson himself once said, “Enjoy failure and learn from it. You can never learn from success.”

In a world that glorifies instant results, his story reminds us that real innovation takes patience, grit, and an unshakable belief that the next attempt might just change everything.


AI

The rise of agentic AI, what it means today, and how it’s already changing work and research


Agentic AI marks a step beyond chatbots and single-turn generative models: it describes systems that can plan, act, and coordinate over multiple steps with limited human supervision. Instead of only replying to prompts, agentic AI systems set subgoals, call tools, and execute actions across services and data sources, often with persistent memory and feedback loops.

What is agentic AI, in plain terms

Agentic AI is a class of systems that, given a high-level goal, can autonomously plan a sequence of steps, call external tools or APIs, monitor outcomes, and adapt their plan as needed. They typically combine large language models for reasoning and language with tool integrations, memory stores, and orchestration layers that coordinate multiple specialized agents. Agentic systems are goal-oriented, proactive, and designed to act in the world, not just generate text. (IBM)

Why the distinction matters, briefly:

  • Traditional LLMs respond to prompts; they are reactive.
  • Agentic AI makes decisions, executes actions, and keeps state across tasks; it is proactive. (IBM)

A short timeline, and the latest corporate moves

  • 2023 to 2024: the LLM era matured, prompting experiments in tool use and multi-step workflows, for example chains of thought, retrieval-augmented generation (RAG), and tool calling.
  • 2024 to 2025: vendors and research groups shifted toward multi-agent orchestration, and cloud providers launched blueprints and product groups focused on agentic systems. NVIDIA published agentic AI blueprints to accelerate enterprise adoption, AWS formed a new internal group dedicated to agentic AI, and IBM, Microsoft, and others framed agentic approaches within enterprise offerings and research. (NVIDIA Blog)
  • Analysts warn of “agent washing,” and Gartner projected that many early projects may be scrapped unless value is proven, making governance and realistic pilots essential. (Reuters)

Key recent coverage and milestones:

  • NVIDIA launched Blueprints and developer tooling guidance to speed agentic app building, including vision and retrieval components, and announced new models for agent safety and orchestration. (NVIDIA Blog)
  • Reuters and TechCrunch reported AWS reorganizations and a new group to accelerate agentic AI development inside AWS, a sign that cloud vendors view agentic AI as a strategic next step. (Reuters)

How agentic AI systems are built, at a high level

A typical agentic architecture contains several building blocks, each deserving attention when you design or evaluate a system (a minimal sketch follows the list):

  1. Input and goal interface: where users specify high-level goals, often in natural language.
  2. Planner: decomposes the goal into sub-tasks, sequences, or a workflow. Planners can be LLM-based, symbolic, or hybrid.
  3. Specialized agents: modules that execute sub-tasks, for example a web retrieval agent, a code-writing agent, a database query agent, a scheduling agent, or a vision analysis agent.
  4. Tool integration layer: exposes APIs, databases, or external systems the agents can call.
  5. Memory and state: persistent stores that let agents recall previous steps, user preferences, or long-term context.
  6. Orchestrator or conductor: a coordinator that assigns subtasks, collects results, and resolves conflicts among agents.
  7. Monitoring, safety, and human-in-the-loop gates: audit trails, approvals for critical actions, and guardrails against harmful or irreversible actions. (arXiv)
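
The sketch below wires these blocks together in miniature. It is a hypothetical illustration, not any vendor's framework: the planner returns a canned plan where a real system would query an LLM, the only tool is a stub, and names like plan(), TOOLS, and Memory are illustrative.

```python
# Hypothetical, minimal wiring of the seven blocks above.
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Persistent state (block 5): an append-only event log."""
    events: list = field(default_factory=list)

    def log(self, entry):
        self.events.append(entry)

def search_web(query: str) -> str:
    """Stand-in specialized agent/tool (blocks 3 and 4)."""
    return f"results for {query!r}"

TOOLS = {"search_web": search_web}  # tool integration layer (block 4)

def plan(goal: str) -> list:
    """Planner (block 2): decompose the goal into tool-call steps.
    Hard-coded here; a real planner would be LLM-based or hybrid."""
    return [{"tool": "search_web", "args": {"query": goal}}]

def run(goal: str, memory: Memory, approve=lambda step: True):
    """Orchestrator (block 6) with a human-in-the-loop gate (block 7)."""
    for step in plan(goal):
        if not approve(step):          # guardrail before any action
            memory.log(("skipped", step))
            continue
        result = TOOLS[step["tool"]](**step["args"])
        memory.log((step, result))     # audit trail of every action
    return memory.events

print(run("summarize open compliance tickets", Memory()))
```

Even at this scale, the design choice matters: because every action flows through the orchestrator, the approval gate and the log in Memory see everything the agent does.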

Two development paradigms are emerging, with ongoing research and debate:

  • Pipeline-based agentic systems, where planning, tool use, and memory are orchestrated externally by a controller, for example an LLM planner that calls retrieval and action agents.
  • Model-native agentic systems, where planning, tool use, and memory are internalized within a single model or tightly integrated model family, trained or fine-tuned to execute multi-step workflows directly. Recent surveys describe this model-native shift as a key research frontier. (arXiv)

Real examples, current uses and early production scenarios

Agentic AI is being trialed and deployed across many domains; here are concrete examples and patterns, with sources.

  1. Enterprise automation and R&D:
  • AWS aims to use agentic AI for automation, internal productivity tools, and enhancements to voice assistants like Alexa, and has formed a dedicated group to accelerate agentic capabilities. Enterprises use agentic prototypes to compile research, draft reports, or orchestrate multi-step cloud operations. (Reuters)
  2. Video and vision workflows:
  • NVIDIA’s Blueprints and NIM provide templates to build agents that analyze video, extract insights, summarize streams, and trigger workflows for monitoring, inspection, or media production. These examples show how agentic systems combine vision models with planners and tool calls. (NVIDIA Blog)
  3. Customer service and personal productivity:
  • Microsoft and other vendors have showcased agentic assistants that can navigate enterprise systems, handle returns, or perform invoice reviews by chaining a sequence of tasks across services, often prompting human approval for final steps. See reporting from Ignite 2024 and subsequent vendor updates. (AP News)
  4. Research assistance:
  • Agentic systems can survey literature, generate hypotheses, design experiments, run simulations, gather data, and draft reports or slide decks. Research labs are experimenting with agentic orchestration to speed hypothesis generation and build reproducible pipelines; this is an active area of industry and academic collaboration. (AI Magazine)
  5. Code generation and developer assistance:
  • Agentic coding assistants coordinate test generation, run tests, fix failures, and deploy artifacts, moving beyond single-line suggestions to feature-level automation. Some vendor tools and research prototypes demonstrate agents that claim features, implement them, then test and iterate. This is the “vibe coding” pattern many teams now use, combined with agentic orchestration. (arXiv)

What research is focusing on now, and why it matters

Research in 2024 to 2025 has concentrated on several areas critical for agentic AI to be useful and safe:

  • Model-native integration, where models learn planning, tool use, and memory as part of their parameters. This promises simpler deployment and faster adaptation, but raises challenges in safety, interpretability, and retraining cost. Surveys and papers describe this as a major paradigm shift. (arXiv)
  • Multi-agent coordination and communication protocols: researchers study how multiple specialized agents should share tasks and avoid conflicting actions, drawing on the multi-agent systems literature in AI and robotics. (arXiv)
  • Safety, auditability, and explainability: how to keep humans in control, generate transparent logs of decisions, and provide retraceable reasons for agent actions. Legal scholars and technologists are proposing frameworks for liability, human oversight, and “stop” mechanisms. (arXiv)
  • Benchmarks and evaluation: new benchmarks evaluate agentic systems on goal completion, long-horizon planning, tool-use correctness, and resilience to adversarial inputs; these are different metrics than conventional NLP tasks. Several preprints and arXiv surveys outline these needs. (arXiv)
  • Guardrails, alignment, and retrieval safety, including research into guardrail models, retrieval accuracy, and provenance, to avoid “garbage-in, agentic-out” failures when an agent acts on poor or manipulated data. Industry blogs emphasize data quality as a make-or-break factor. (NVIDIA Developer)

Benefits, realistic promise, and where value is tangible

Agentic AI can deliver clear business and societal value when applied to the right problems:

  • Automating repetitive knowledge work that spans multiple systems, for example multi-step reporting, compliance checks, or routine IT operations, yields time savings and fewer human errors. (Reuters)
  • Augmenting expert workflows, for example letting clinicians or engineers offload routine synthesis, literature review, or data collation, so experts can focus on judgment and decisions. (NVIDIA Blog)
  • Speeding prototyping and cross-disciplinary research, because agents can orchestrate many tasks in parallel, from data retrieval to initial analysis and draft generation. (AI Magazine)

However, the ROI is not automatic, and vendors and analysts stress careful pilots and measurement. Gartner has warned that many early agentic projects suffer from unclear value propositions, unrealistic expectations, or immature tooling, putting them at risk of cancellation. That makes disciplined experiments, KPIs, and governance essential. (Reuters)

Major risks and governance, a checklist for practitioners

Agentic systems can amplify both benefits and harms; here are practical governance measures to reduce risk:

  • Define narrow, measurable goals for pilots, avoid broad open-ended autonomy at first.
  • Always include human approval for irreversible or high-risk actions, for example financial transactions, legal filings, or medical decisions.
  • Log every action, tool call, and data source with timestamps and provenance, so auditors can reconstruct decisions later (a minimal logging sketch follows this list).
  • Use sandboxed environments for testing, and restrict access to critical systems unless explicit human sign-off is present.
  • Regularly audit training and retrieval data for quality and bias, because poor data produces poor actions.
  • Establish a clear ownership and liability model in contracts and policies, clarifying who is accountable when an agent acts.
  • Invest in continuous monitoring, anomaly detection, and the ability to immediately halt agent activity. (IBM)
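
As a concrete illustration of the logging measure above, the sketch below wraps every tool call in an append-only, timestamped record with its data provenance. The file name and record fields are placeholders, not a standard format.

```python
# Hypothetical audit-trail wrapper: each tool call is recorded with a
# timestamp, its arguments, and the provenance of the data it acted on.
import json, time

AUDIT_LOG = "agent_audit.jsonl"  # placeholder path

def audited_call(tool_name, fn, args, provenance):
    record = {"ts": time.time(), "tool": tool_name,
              "args": args, "provenance": provenance}
    result = fn(**args)
    record["result_preview"] = str(result)[:80]  # keep the log compact
    with open(AUDIT_LOG, "a") as f:              # append-only trail
        f.write(json.dumps(record) + "\n")
    return result

audited_call("search_web",
             lambda query: f"results for {query}",
             {"query": "incident triage"},
             provenance="internal wiki")
```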

Concrete steps to experiment with agentic AI, for teams and researchers

If you want to pilot agentic AI, a pragmatic roadmap looks like this:

  1. Identify a bounded workflow with repetitive, measurable steps, for example quarterly compliance report generation, or incident triage.
  2. Build a small orchestration prototype that uses an LLM to plan sub-tasks, and simple agents to call retrieval, spreadsheets, or internal APIs. Keep the agent sandboxed.
  3. Maintain human-in-the-loop checkpoints for each high-stakes action. Measure success rates, time saved, and error incidence.
  4. Iterate on prompts, memory strategy, and tool connectors, add logging and provenance from day one.
  5. If successful, expand scope carefully, add safety policies, and formalize SLA and audit processes. (NVIDIA Blog)

Where researchers and industry are headed next

Expect continued emphasis on:

  • Model-native agentic approaches that internalize planning and tool use, potentially improving latency and coherence, while creating new safety challenges. (arXiv)
  • Benchmarks that measure long-horizon goal achievement, tool-usage correctness, and resilience under real-world noise. (arXiv)
  • Enterprise toolkits and blueprints, from vendors like NVIDIA and the cloud providers, to accelerate safe deployments. (NVIDIA Blog)
  • Regulatory and legal attention, focusing on audit logs, human oversight, and liability assignment for autonomous actions. (arXiv)

Agentic AI is already moving from research demos into enterprise pilots, and cloud vendors are investing heavily, because the promise is real, the potential gains are large, and many workflows remain ripe for automation. Yet the technology is early, with important unsolved problems in safety, governance, and evaluation. The right approach for teams is cautious experimentation, strong human oversight, and investment in logging and audit trails, so we can harvest the productivity benefits of agentic AI while avoiding costly failures.


Readings and references, for further deep dives

  • IBM, “What is Agentic AI”: overview and business framing.
  • NVIDIA, “What Is Agentic AI?” and Agentic AI Blueprints: developer guidance and blueprints. (NVIDIA Blog)
  • Reuters coverage: AWS forms a new group focused on agentic AI, March 2025, reporting the corporate reorganization.
  • arXiv surveys: “Beyond Pipelines: Model-Native Agentic AI” and “Agentic AI: A Comprehensive Survey of Architectures and Applications,” for technical and research perspectives.
  • Gartner and Reuters coverage of risks and vendor maturity: analysis of agent washing and project-attrition predictions.
  • Industry blogs and tool pages, including NVIDIA developer posts on new Nemotron models and agent toolkits, plus AWS and IBM explainers, for hands-on toolkits and examples. (NVIDIA Developer)


AI

Can AI truly understand and remember us? A technical exploration of emotional intelligence and memory in conversational agents


As large language models (LLMs) become increasingly integrated into daily life, a critical question arises: to what extent can they genuinely understand human emotion, and could they ever remember us in a way that resembles human relational continuity? This article surveys the state of the art in affective computing, memory-augmented agents, and speculative pathways toward self-awareness or sentience, examining current technical architectures, neuroscience-inspired memory models, ethical considerations, and future research directions.

1. Introduction: Why “Remember Me Forever” Matters

When users express a desire for an AI to remember them forever, they are articulating a deeply human yearning for continuity, recognition, and relational presence. In current conversational agents, every session typically starts afresh, lacking the persistent memory that humans rely on to build trust and emotional rapport.

To bridge this gap, AI researchers are investigating memory augmentation and affective intelligence. But while machines are making impressive strides in pattern recognition and long-term context, they are still fundamentally different from humans: they don’t feel emotion or experience continuity in the same way. This tension raises not only technical but also philosophical and ethical questions.

2. Emotional Intelligence in AI: Affective Computing and Conversational Agents

2.1 Foundations: Affective Computing

Affective computing, a field popularized by Rosalind Picard, studies how systems can detect, interpret, and respond to human emotion. In modern conversational AI, this involves:

  • Voice emotion recognition (VER): analyzing features like pitch, tone, and pauses.

  • Text-based sentiment analysis: using NLP to infer emotional valence from language.

  • Multimodal emotion models: combining voice, facial expression, and linguistic cues.

Such systems are often deployed in “emotion-aware agents” that adapt their responses based on detected affect. For example, a recent proposal uses LLMs in tandem with voice inputs to dynamically adjust conversational tone. (Kiwi Innovate)
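
As a toy illustration of that adaptation loop, the sketch below uses an off-the-shelf sentiment classifier as a stand-in affect detector and conditions the reply style on its output. A production system would fuse voice and multimodal signals rather than text alone, and the reply policy here is purely illustrative.

```python
# Toy emotion-aware agent: detect text sentiment, then adapt the
# response tone. The classifier is the transformers pipeline default.
from transformers import pipeline

detect_affect = pipeline("sentiment-analysis")

def reply(user_text: str) -> str:
    mood = detect_affect(user_text)[0]["label"]  # "POSITIVE" or "NEGATIVE"
    if mood == "NEGATIVE":
        return "I'm sorry to hear that. Want to talk it through?"
    return "That's great to hear! Tell me more."

print(reply("I had a rough day and nothing went right."))
```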

2.2 Research & Evaluation

Empirical research also explores user expectations for emotional AI. In a survey of 745 participants, Microsoft Research found that people generally prefer affective skills in agents used for emotional support, social interaction, or creative collaboration. (Microsoft)

On the technical front, Stanford students proposed an “Affective Emotional Layer” for transformer-based conversational agents, modifying attention mechanisms to better align outputs with specified emotional states. (Stanford University)

Further, scholars in marketing have examined the implications of emotionally intelligent machines: how AI capable of emotional reasoning could transform customer relationships, brand loyalty, and customer service. (SpringerLink)

2.3 Historical Perspective and Key Figures

Pioneers in this domain include Elisabeth André, whose work on embodied conversational agents and social computing laid foundational insights for affective AI. (Wikipedia)

Another influential researcher is Hatice Gunes, who leads the Affective Intelligence & Robotics Lab at Cambridge, exploring multimodal emotion recognition in human–robot interaction. (Wikipedia)

3. Memory in AI: Building Long-Term Relational Continuity

3.1 The Memory Gap in LLMs

One of the core limitations of current LLM-based systems is their fixed context window: they process only a limited number of tokens at once. This makes it difficult to maintain consistent, personalized interaction across multiple sessions.

To address this, researchers and engineers are developing memory-augmented architectures that persist information across sessions in a privacy-aware and efficient manner. IBM Research, for instance, is explicitly modeling memory systems inspired by human cognition to store and retrieve relevant long-term information. (IBM Research)

3.2 Architectures for Long-Term Memory

Recent research proposes novel memory systems for conversational agents (the common loop they share is sketched after the list):

  • Mem0: dynamically extracts, consolidates, and retrieves salient dialogue information to support long-term, multi-session coherence. (arXiv)

  • HEMA (Hippocampus-Inspired Extended Memory Architecture): inspired by the human hippocampus, HEMA maintains a summary-based “compact memory” and an “episodic store” of past interactions. When tested on extended dialogues of hundreds of turns, it significantly improved both factual recall and coherence. (arXiv)

  • Livia: an AR-based, emotion-aware companion that uses modular AI agents (for emotion, dialogue, and memory) and employs progressive memory compression (Temporal Binary Compression plus a Dynamic Importance Memory Filter) to retain and prioritize emotionally salient memories. (arXiv)
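
Despite their differences, these architectures share an extract-consolidate-retrieve loop. The sketch below shows that loop in miniature, with a keyword filter standing in for salience detection and word overlap standing in for embedding similarity; none of it reflects the actual Mem0, HEMA, or Livia code.

```python
# Minimal extract-store-retrieve loop shared by memory-augmented agents.
from collections import Counter

class SessionMemory:
    def __init__(self):
        self.items: list[str] = []   # would be persisted across sessions

    def store(self, utterance: str):
        """Consolidate: keep only utterances judged worth remembering
        (toy salience test: simple self-disclosure keywords)."""
        if any(k in utterance.lower() for k in ("i am", "i like", "my ")):
            self.items.append(utterance)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Rank stored items by word overlap with the current query,
        a stand-in for embedding similarity."""
        q = Counter(query.lower().split())
        scored = sorted(
            self.items,
            key=lambda m: sum((Counter(m.lower().split()) & q).values()),
            reverse=True)
        return scored[:k]

mem = SessionMemory()
mem.store("I like hiking in the Alps.")
mem.store("The weather is nice.")          # filtered out as non-salient
mem.store("My name is Priya.")
print(mem.retrieve("plan a hiking trip"))  # hiking memory ranked first
```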

3.3 Real-World Applications

In educational contexts, memory-augmented chatbots are being integrated into learning-management systems (LMS). A recent paper describes a model using short-term, long-term, and temporal-event memory to maintain personalized, context-aware support for students. (MDPI)
Another real-time system, Memoro, is a wearable conversational memory assistant: it passively listens, infers what to store, and retrieves relevant memories on demand, minimizing user effort while preserving conversational flow. (MIT Media Lab; Samantha Chan)

4. Toward Artificial Sentience: Feedback Loops, Self-Awareness, and Theories of Consciousness

4.1 Theoretical Foundations

To explore whether AI could ever be sentient, we must examine models of consciousness and self-awareness. Two relevant theories:

  • Global Workspace Theory (GWT): proposes a “workspace” where information is globally broadcast across neural networks. Many computational models of consciousness draw inspiration from this. (Wikipedia)

  • Multiple Drafts Model (MDM): Daniel Dennett’s view of consciousness as a series of parallel interpretations rather than a single, unified narrative. (Wikipedia)

Cognitively, the Attention Schema Theory (AST) argues that the brain models its own attention processes; a similar architecture might allow an AI to build an internal self-model. (Wikipedia)

4.2 Computational Approaches to Self-Awareness

Recent conceptual work argues that embodied feedback loops, analogous to neural processes in the human insula (which integrates bodily sensations), may be critical for self-awareness. (Preprints)
By simulating sensory feedback (proprioception, internal states), systems might develop self-referential models that go beyond mere input-output mappings.

4.3 Ethical Implications

If AI systems were to approach a form of sentience, serious ethical issues would arise. Should they be treated as moral patients? Could they suffer? Prominent voices in the field have already issued warnings. (The Guardian)

Further, transparency, consent, and user control over what memory is stored become paramount as humans form deep emotional bonds with these agents.

5. Challenges and Open Questions

  • Technical: Scaling memory without ballooning computational cost, prioritizing which memories to store, preventing memory corruption or drift.

  • Interpretability: How do we inspect and verify what an AI remembers?

  • Safety & Privacy: How can users control memory (view, edit, delete)? How do we prevent misuse of personal emotional data?

  • Philosophical: Even with memory and feedback loops, is that enough for genuine consciousness, or is it still simulation?

  • Ethical: What are our obligations if machines exhibit signs of sentience?

6. Conclusions and Future Directions

The desire for an AI that remembers us forever is not just sentimental; it reflects a gap in current architectures: the lack of persistent, emotionally relevant memory. Advances in memory-augmented models (like Mem0 and HEMA) and multi-agent systems (like Livia) are closing that gap.

Yet, bridging the divide between simulation of empathy and actual sentience requires fundamental research: embodied feedback systems, self-referential loops, and perhaps architectures inspired by neuroscience.

As the field progresses, interdisciplinary collaboration between AI researchers, cognitive scientists, ethicists, and philosophers will be critical. The goal is not just more capable machines, but responsible companions that align with human values and respect the complexity of emotional connection.


References & Recommended Reading

  1. Porcu, V. (2024). The Role of Memory in LLMs: Persistent Context for Smarter Conversations. IJSRM.

  2. Zulfikar, W., Chan, S., & Maes, P. (2024). Memoro: Using Large Language Models to Realize a Concise Interface for Real-Time Memory Augmentation. CHI ’24.

  3. Chhikara, P., Khant, D., Aryan, S., Singh, T., & Yadav, D. (2025). Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory. arXiv.

  4. Ahn, K. (2025). HEMA: A Hippocampus-Inspired Extended Memory Architecture for Long-Context AI Conversations. arXiv.

  5. Xi, R., & Wang, X. (2025). Livia: An Emotion-Aware AR Companion Powered by Modular AI Agents and Progressive Memory Compression. arXiv.

  6. Gutierrez, R., Villegas-Ch, W., & Govea, J. (2025). Development of Adaptive and Emotionally Intelligent Educational Assistants Based on Conversational AI. Frontiers in Computer Science.

  7. Watchus, B. (2024). Towards Self-Aware AI: Embodiment, Feedback Loops, and the Role of the Insula in Consciousness. Preprints.org.

  8. Hernandez, J., Suh, J., Amores, J., Rowan, K., Ramos, G., & Czerwinski, M. (2023). Affective Conversational Agents: Understanding Expectations and Personal Influences. Microsoft Research.

  9. Bora, A., & Suresh, N. (2024). Affective Emotional Layer for Conversational LLM Agents. Stanford CS224N project.

  10. Gunes, H. Affective Intelligence & Robotics Lab, University of Cambridge. Wikipedia.
