Building a Hybrid Recommendation Engine using Python

In this post, we’ll explore how to build a hybrid recommendation engine in Python, combining the strengths of matrix factorization and content-based filtering to create a more robust recommendation system.

Understanding the Basics

Before diving into the code, let’s understand the two core components of our hybrid system:

  • Matrix Factorization: This technique helps in uncovering latent features from user-item interactions. It’s particularly good at dealing with sparse datasets and providing personalized recommendations.
  • Content-Based Filtering: This method uses item features to recommend additional items similar to what the user likes, based on their previous actions.

Step 1: Creating a Synthetic Dataset

First, we need a dataset to work with. We’ll create a simple synthetic dataset representing user interactions with various coupons.

import pandas as pd

# Sample user-item interactions
data = {
    'user_id': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    'coupon_id': [101, 102, 101, 103, 102, 104, 101, 105, 103, 104],
    'clicks': [1, 2, 1, 1, 0, 2, 1, 0, 2, 1],
    'uses': [0, 1, 0, 1, 0, 1, 1, 0, 1, 0]
}
df = pd.DataFrame(data)

Step 2: Implementing Matrix Factorization

We use matrix factorization to uncover the underlying latent features in the user-item interaction data.

import numpy as np
from scipy.sparse.linalg import svds

# Combine clicks and uses into a single implicit 'interaction' score
# (an assumption: the original snippet pivots on this column without defining it)
df['interaction'] = df['clicks'] + df['uses']

# Creating a user-item matrix
user_item_matrix = df.pivot(index='user_id', columns='coupon_id', values='interaction').fillna(0)

# Performing matrix factorization; k is the number of latent factors
U, sigma, Vt = svds(user_item_matrix.values.astype(float), k=2)
sigma = np.diag(sigma)
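
With the factors computed, an approximate score for every user-item pair can be reconstructed by multiplying them back together. Here is a small sketch of that step (the name predicted_scores is our addition, not from the original post); we reuse it in Step 4:

# Reconstruct the approximate user-item score matrix from the SVD factors
predicted_scores = pd.DataFrame(
    U @ sigma @ Vt,
    index=user_item_matrix.index,
    columns=user_item_matrix.columns,
)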

Step 3: Building the Content-Based Component

Next, we add features for each coupon and prepare our dataset for content-based filtering.

from sklearn.preprocessing import MinMaxScaler

# Additional data for content-based filtering
coupon_features_data = {
    'coupon_id': [101, 102, 103, 104, 105],
    'category': ['Medication', 'Wellness', 'Medication', 'Beauty', 'Wellness'],
    'discount_rate': [10, 15, 5, 20, 10]
}
coupon_features_df = pd.DataFrame(coupon_features_data)

# One-hot encoding and normalization
coupon_features_df = pd.get_dummies(coupon_features_df, columns=['category'])
scaler = MinMaxScaler()
coupon_features_df['discount_rate'] = scaler.fit_transform(coupon_features_df[['discount_rate']])
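
Step 4 below also relies on a user_profiles object that the original post never constructs. One common approach, sketched here as an assumption, is to represent each user by the average feature vector of the coupons they have interacted with:

# Hypothetical user profiles (not defined in the original post):
# the mean feature vector of each user's coupons
feature_cols = [c for c in coupon_features_df.columns if c != 'coupon_id']
item_features = coupon_features_df.set_index('coupon_id')[feature_cols].astype(float)

user_profiles = {
    uid: item_features.loc[group['coupon_id']].mean()
    for uid, group in df.groupby('user_id')
}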

Step 4: Combining Both Approaches

We’ll now combine predictions from both matrix factorization and content-based filtering.

def hybrid_recommendation(user_id, num_recommendations):
    # Collaborative filtering predictions
    cf_predictions = recommend_coupons(user_id, num_recommendations)
    # Content-based predictions
    cb_predictions = predict_content_based(user_id, coupon_features_df, user_profiles)
    # Averaging the scores
    hybrid_predictions = (cf_predictions + cb_predictions) / 2
    # Filtering out already interacted items
    already_interacted = set(df[df['user_id'] == user_id]['coupon_id'])
    hybrid_predictions = hybrid_predictions[~hybrid_predictions.index.isin(already_interacted)]
    return hybrid_predictions.sort_values(ascending=False).head(num_recommendations)
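
Note that hybrid_recommendation calls two helpers, recommend_coupons and predict_content_based, that the original post leaves undefined. Using the predicted_scores matrix from Step 2 and the item_features and user_profiles built in Step 3, one plausible sketch is:

# Hypothetical helpers; the original post does not define these,
# so the implementations below are one plausible reading.

def recommend_coupons(user_id, num_recommendations):
    # Collaborative-filtering scores for every coupon (Series keyed by coupon_id).
    # All scores are returned so they can be averaged with the content-based ones.
    return predicted_scores.loc[user_id]

def predict_content_based(user_id, coupon_features_df, user_profiles):
    # Score each coupon by the dot product with the user's profile vector
    return item_features @ user_profiles[user_id]

# Example: top 2 hybrid recommendations for user 1
print(hybrid_recommendation(1, 2))

In practice the two score vectors live on different scales, so you would typically normalize each (for example, min-max scale them) and use a weighted combination rather than a plain average.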

Conclusion

This post has walked you through building a basic hybrid recommendation system in Python. We combined matrix factorization and content-based filtering to leverage the strengths of both methods: the collaborative component provides personalized recommendations from interaction data, while the content-based component helps the system handle new items (the item cold-start problem).

Remember, the code provided here is a simplified version of what a real-world recommendation system might look like. In practice, you would need to handle larger datasets, refine the model’s parameters, and continuously test and validate your system’s performance.

I hope this post provides a solid starting point for those interested in diving into the world of recommendation engines. Happy coding!

Deploying a Doctor’s Assistant with LangChain and Vertex AI PaLM2

The original article on Google Cloud is co-authored by the Founding Engineer of LangChain, a very popular framework that makes it seamless to build apps with large language models (LLMs). In this post we will explore the quickest way to deploy a doctor’s assistant that can answer questions about the possible differential diagnoses (DDx), ICD10 codes, and CPT codes a doctor might need to consider for a patient’s chief complaints.

Model

We will use Google’s Vertex AI PaLM2 model as the chat model and give this conversationally fine-tuned model the right prompt to assist our doctor. Two things to note about this model: its output is limited to 1024 tokens (source), and the use policy dictates that end users must not use it to automate decisions in healthcare. Healthcare policy is still an evolving landscape, and there is a line to navigate between augmenting practitioner intelligence and fully automating the task.

Serve as an API

Set up Google Cloud Project

Set default project

gcloud config set project [PROJECT-ID]

Enable Vertex AI

gcloud services enable aiplatform.googleapis.com

Call PaLM LLM from localhost

gcloud auth application-default login

Set up LangChain

Install langchain CLI

pip install langchain-cli

The command below will create a production-ready API using LangServe. We will borrow the skeleton of the vertexai-chuck-norris package.

langchain app new my-demo --package vertexai-chuck-norris

The original chuck-norris package tells a joke about Chuck Norris combined with whatever the user enters as a prompt. Instead of a joke, we will update the chain’s prompt so the model responds with relevant information to assist the doctor in diagnosing.

OLD:

_prompt = ChatPromptTemplate.from_template("Tell me a joke about Chuck Norris and : {text}")

NEW:

_prompt = ChatPromptTemplate.from_template("Tell me the DDx, ICD10, and CPT codes for the following chief complaint: {text}")
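
For context, here is a minimal sketch of how this prompt is wired into the Vertex AI chat model inside the package’s chain; the exact file layout and import paths in the vertexai-chuck-norris template may differ depending on your LangChain version:

from langchain_core.prompts import ChatPromptTemplate
# On older LangChain versions: from langchain.chat_models import ChatVertexAI
from langchain_google_vertexai import ChatVertexAI

_prompt = ChatPromptTemplate.from_template(
    "Tell me the DDx, ICD10, and CPT codes for the following chief complaint: {text}"
)
_model = ChatVertexAI()

# LCEL pipeline: the formatted prompt feeds the chat model
chain = _prompt | _model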

Local Testing

Change into the my-demo directory and serve the app on localhost:

langchain serve
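
LangServe serves the app on port 8000 by default, so assuming your route is mounted at /vertexai-ddx-icd10-cpt (matching the deployed endpoint shown later), a quick local smoke test might look like:

curl -X 'POST' \
  'http://localhost:8000/vertexai-ddx-icd10-cpt/invoke' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"input": {"text": "stomach pain with blood in stool"}, "config": {}, "kwargs": {}}'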


Deployment

Now we move from local testing to deploying our application as an HTTPS endpoint on Cloud Run, a serverless engine that runs our FastAPI endpoint with autoscaling. Follow the on-screen instructions and pick a name for your application; for this instance, we chose the name gptdoc.

gcloud run deploy

To test your application in the cloud, issue the following command; you should get a response similar to the one from local testing. Be sure to replace https://gptdoc-fbpcgd5sua-uw.a.run.app/vertexai-ddx-icd10-cpt/invoke with your own endpoint, and change the “text” field to anything else the doctor might be curious about.

curl -X 'POST' \
  'https://gptdoc-fbpcgd5sua-uw.a.run.app/vertexai-ddx-icd10-cpt/invoke' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"input": {"text": "stomach pain with blood in stool"}, "config": {}, "kwargs": {}}'


We can also check our container’s logs on Cloud Run for the HTTP 200 status, confirming that our service returned a successful response.


Next Steps

Create your own chain using LangServe to deploy your own use cases (chatbots, retrieval agents, etc.) in a scalable fashion.

Full code: GitHub
