By Joab Jackson, The New Stack

Couchbase Adds Vector Search for Full Hybrid Search Capabilities


The Couchbase line of database systems and database services will soon have vector processing capabilities, giving them the ability to offer not only support for large language models (LLMs) but for full hybrid search capabilities as well.

The new feature, which will be available in the Couchbase Capella Database as a Service (DBaaS) and Couchbase Server 7.6, could help organizations integrate AI-based search capabilities into their existing applications, an approach called “hybrid search.”

The Power of Vector Processing

Many new vector databases, as well as vector capabilities in standard databases, have recently been introduced amid the wave of interest in personalized generative AI applications.

Couchbase’s new offering is unique, the company claims, in that vector search can be run across different platforms: on-site, across clouds, and on mobile and embedded devices. Couchbase itself is a NoSQL JSON-oriented database, but it also has a synchronization mechanism to move documents to a mobile or an Internet of Things device, through a Couchbase Lite datastore.

“This vector capability will be available as part of our overall mobile solution so that you can do a vectorized index on Couchbase Lite, and be able to query that as part of an application running on the device itself,” said Scott Anderson, senior vice president of product management and business operations at Couchbase, in an interview with The New Stack.

A repair person trying to fix a device would find this capability handy, Anderson explained. The person can take a photo of the problematic component, and then a mobile application can compare that image to an index of images stored locally, in order to identify the issue at hand.
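The on-device lookup Anderson describes can be sketched in a few lines. This is a hypothetical illustration, not Couchbase Lite's API: the component names, the toy three-dimensional embeddings, and the `identify_component` helper are all invented, standing in for an image model's embeddings and a vectorized index stored locally on the device.

```python
import math

# Toy stand-in for a local vector index of known component images.
# Real embeddings would come from an image model and live in a
# Couchbase Lite datastore; these 3-d vectors are invented.
LOCAL_INDEX = {
    "cracked-valve": [0.9, 0.1, 0.0],
    "worn-belt":     [0.1, 0.8, 0.2],
    "loose-wire":    [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify_component(photo_embedding):
    # Return the indexed component whose embedding is most similar
    # to the embedding of the photo the repair person just took.
    return max(
        LOCAL_INDEX,
        key=lambda name: cosine_similarity(LOCAL_INDEX[name], photo_embedding),
    )

print(identify_component([0.85, 0.15, 0.05]))  # → cracked-valve
```

Because both the index and the query run locally, the lookup works even when the device is offline; synchronization back to the server can happen later.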

Couchbase’s vector search works with another new feature, a columnar service, which is also designed to cut search response times. A single index can be used for all search patterns, and Retrieval-Augmented Generation (RAG) can easily be added to improve LLM accuracy. To further the use of AI LLMs, Couchbase has also partnered with LangChain and LlamaIndex, offering APIs and other points of integration.
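The RAG pattern mentioned above can be illustrated with a minimal sketch. Note this is not the Couchbase, LangChain, or LlamaIndex API: the document store, the dot-product ranking, and the `build_prompt` helper are invented stand-ins showing only the pattern itself, i.e. retrieving the stored documents nearest a query embedding and splicing them into the LLM prompt as grounding context.

```python
# Invented toy corpus; real documents would be embedded by a model
# and indexed by the database's vector search service.
DOC_STORE = [
    {"text": "Capella is Couchbase's DBaaS.",  "embedding": [1.0, 0.0]},
    {"text": "Couchbase Lite runs on devices.", "embedding": [0.0, 1.0]},
    {"text": "SQL++ queries JSON documents.",   "embedding": [0.7, 0.7]},
]

def top_k(query_embedding, k=2):
    # Rank documents by dot product, a stand-in for the vector
    # index's similarity metric, and keep the k best matches.
    def score(doc):
        return sum(q * d for q, d in zip(query_embedding, doc["embedding"]))
    return sorted(DOC_STORE, key=score, reverse=True)[:k]

def build_prompt(question, query_embedding):
    # Ground the LLM with retrieved context to improve accuracy.
    context = "\n".join(doc["text"] for doc in top_k(query_embedding))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is Capella?", [1.0, 0.1])
print(prompt)
```

In a production setup, the retrieval step would be handled by the database's vector index and the prompt assembly by a framework such as LangChain or LlamaIndex; the flow, however, is the same.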

The Power of Hybrid Search

Despite the hype, vector search alone won’t offer the full capabilities needed by today’s applications, Anderson said. This is also where Couchbase’s unique traits come into play.

Anderson offered an example of how hybrid search could be used, and how it couldn’t be replicated with only a standard relational database system.

Say a customer is looking for a pair of shoes in a particular shade of blue. They can upload an image of that particular color, and a vector search across an index of color embeddings can identify the exact shade.

But the query will come with other requirements such as brand name (“Nike”), size (“11”), customer rating thresholds (“four stars or above”), a price range, and availability within 50 miles of the user.

“So if you think about that statement, there’s a combination of access patterns. It’s not just a vectorized search; it’s a range search, a keyword search, geospatial, and a query for local store inventory,” Anderson said.

For the lowest latency possible, a single search service should be used to gather all these results, Anderson argues.

“So that ability to combine all those different access mechanisms into a single SQL statement from the application is going to give much more precise results.”
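The shoe-shopping query above can be sketched as follows. This is a hedged illustration in plain Python rather than the single SQL++ statement Anderson describes: the product records, the distance threshold, and the `hybrid_search` helper are all invented, and the point is only to show vector similarity being combined with keyword, range, and geospatial-style filters in one query.

```python
# Invented product catalog; "color_vec" stands in for an embedding
# of each shoe's color produced by some image model.
PRODUCTS = [
    {"brand": "Nike",   "size": 11, "rating": 4.5, "price": 120,
     "miles": 12, "color_vec": [0.1, 0.2, 0.90]},
    {"brand": "Nike",   "size": 11, "rating": 3.8, "price": 95,
     "miles": 8,  "color_vec": [0.1, 0.2, 0.85]},
    {"brand": "Adidas", "size": 11, "rating": 4.7, "price": 110,
     "miles": 5,  "color_vec": [0.1, 0.2, 0.90]},
]

def hybrid_search(query_vec, brand, size, min_rating, max_price, max_miles):
    def color_matches(v):
        # Squared Euclidean distance as a stand-in for the vector
        # index; the 0.05 threshold is arbitrary for illustration.
        return sum((a - b) ** 2 for a, b in zip(v, query_vec)) < 0.05
    # Keyword (brand), exact (size), range (rating, price),
    # geospatial-style (miles), and vector (color) predicates
    # evaluated together, as one statement would do.
    return [
        p for p in PRODUCTS
        if p["brand"] == brand and p["size"] == size
        and p["rating"] >= min_rating and p["price"] <= max_price
        and p["miles"] <= max_miles and color_matches(p["color_vec"])
    ]

matches = hybrid_search([0.1, 0.2, 0.9], "Nike", 11, 4.0, 150, 50)
print(matches)  # only the 4.5-star Nike passes every predicate
```

The argument for a single search service is visible even in this toy version: splitting these predicates across separate systems would mean merging and re-ranking partial result sets, adding latency at every hop.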

To learn more, check out the company’s Spring Release webcast on March 19-20.

The post Couchbase Adds Vector Search for Full Hybrid Search Capabilities appeared first on The New Stack.

