Risks of Chatbot Adoption: Protecting AI Language Models from Data Leakage, Poisoning, and Attacks
The Official Blog of Adam DiStefano, M.S., CISSP | April 27, 2023

Artificial Intelligence is going to revolutionize the world. We are already seeing the adoption of chatbots, which can enhance the way businesses deliver value both to their internal processes and to their customers. However, it is important to understand that the adoption of these tools does not come without new risks. In this blog post, we will discuss some of the biggest risks businesses face when adopting tools like chatbots.

Risk 1: Data Leakage and Privacy Concerns

Natural language models are pre-trained on vast amounts of data from various sources, including websites, articles, and user-generated content. When sensitive information is inadvertently embedded in that data, it can surface as data leakage or privacy violations when the model generates text based on it.

Data leakage occurs when sensitive or confidential data is exposed to or accessed by unauthorized parties during the training or deployment of machine learning models. This can happen for a variety of reasons, such as a lack of proper security measures, coding errors, or intentional malicious activity. Data leakage can compromise the privacy and security of the data, with potential legal and financial implications for businesses. It can also lead to biased or inaccurate AI models, as the leaked data may contain information that is not representative of the larger population.

Data Leakage in the Wild

In late March of 2023, ChatGPT alerted users to an identified flaw that enabled some users to view portions of other users’ conversations with the chatbot. OpenAI confirmed that a vulnerability in the redis-py open-source library was the cause of the data leak: “During a nine-hour window on March 20, 2023, another ChatGPT user may have inadvertently seen your billing information when clicking on their own ‘Manage Subscription’ page,” according to an article posted on HelpNetSecurity. The article went on to say that OpenAI uses “Redis to cache user information in their server, Redis Cluster to distribute this load over multiple Redis instances, and the redis-py library to interface with Redis from their Python server, which runs with Asyncio.”

Earlier this month, three incidents of data leakage occurred at Samsung as a result of using ChatGPT. Dark Reading described “the first incident as involving an engineer who passed buggy source code from a semiconductor database into ChatGPT, with a prompt to the chatbot to fix the errors. In the second instance, an employee wanting to optimize code for identifying defects in certain Samsung equipment pasted that code into ChatGPT. The third leak resulted when an employee asked ChatGPT to generate the minutes of an internal meeting at Samsung.” Samsung has responded by limiting ChatGPT usage internally and blocking employee prompts to ChatGPT larger than 1,024 bytes.

Recommendations for Mitigation

  • Access controls should be implemented to restrict access to sensitive data only to authorized personnel. This is accomplished through user authentication, authorization, and privilege management. There was recently a story posted on Fox Business introducing a new tool called LLM Shield to help companies ensure that confidential and sensitive information cannot be uploaded to tools like ChatGPT. Essentially, “administrators can set guardrails for what type of data a company wants to protect. LLM Shield then warns users whenever they are about to send sensitive data, obfuscates details so the content is useful but not legible by humans, and stop[s] users from sending messages with keywords indicating the presence of sensitive data.” You can learn more about this tool by visiting their website.
  • Use data encryption techniques to protect data while it’s stored or transmitted. Encryption ensures that data is unreadable without the appropriate decryption key, making it difficult for unauthorized individuals to access sensitive information.
  • Implement data handling procedures so data is protected throughout the entire lifecycle, from collection to deletion. This includes proper storage, backup, and disposal procedures.
  • Regular monitoring and auditing of AI models can help identify any potential data leakage or security breaches. This is done through automated monitoring tools or manual checks.
  • Regular testing and updating of AI models can help identify and fix any vulnerabilities or weaknesses that may lead to data leakage. This includes testing for security flaws, bugs, and issues with data handling and encryption. Regular updates should also be made to keep AI models up-to-date with the latest security standards and best practices.
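Guardrails of the kind LLM Shield provides can also be approximated in-house with a pre-send filter that scans outgoing prompts for sensitive patterns. Below is a minimal sketch; the patterns and the `redact_prompt` function are illustrative assumptions, not LLM Shield’s actual behavior, and a real deployment would need far richer detection.

```python
import re

# Illustrative patterns for a few common sensitive data types
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, bool]:
    """Replace sensitive matches with placeholders before sending to a chatbot.

    Returns the redacted prompt and a flag indicating whether anything was found.
    """
    found = False
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt, n = pattern.subn(f"[REDACTED-{label.upper()}]", prompt)
        found = found or n > 0
    return prompt, found

redacted, flagged = redact_prompt("Contact jane.doe@example.com, SSN 123-45-6789")
print(redacted)  # Contact [REDACTED-EMAIL], SSN [REDACTED-SSN]
print(flagged)   # True
```

A filter like this would sit between the user and the chatbot API, warning the user or blocking the request whenever the flag is raised.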

Risk 2: Data Poisoning

Data poisoning refers to the intentional corruption of an AI model’s training data, leading to a compromised model with skewed predictions or behaviors. Attackers can inject malicious data into the training dataset, causing the model to learn incorrect patterns or biases. This vulnerability can result in flawed decision-making, security breaches, or a loss of trust in the AI system.

I recently read a study entitled “TrojanPuzzle: Covertly Poisoning Code-Suggestion Models” that discussed the potential for an adversary to inject training data crafted to maliciously affect the induced system’s output. With tools like OpenAI’s Codex models and GitHub Copilot, this could be a huge risk for organizations leveraging code-suggestion models. While basic attempts at poisoning data are detectable by static analysis tools that can remove such malicious inputs from the training set, the study shows that there are more sophisticated methods that allow malicious actors to go undetected.

The technique, coined TROJANPUZZLE, works by injecting malicious code into the training data in a way that is difficult to detect. The malicious code is hidden in a puzzle, which the code-suggestion model must solve in order to generate the malicious payload. The attack works by first creating a puzzle that is composed of two parts: a harmless part and a malicious part. The harmless part is used to lure the code-suggestion model into solving the puzzle. The malicious part is hidden in the puzzle and is only revealed after the harmless part has been solved. Once the code-suggestion model has solved the puzzle, it is then able to generate the malicious payload. The malicious payload can be anything that the attacker wants, such as a backdoor, a denial-of-service attack, or a data exfiltration attack.
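TrojanPuzzle itself is far more sophisticated, but the underlying idea that a small amount of corrupted training data can skew a model can be illustrated with a much simpler label-flipping attack on a toy nearest-centroid classifier. This sketch is purely illustrative and unrelated to the paper’s actual technique:

```python
import statistics

def train_centroids(data):
    """Compute the per-class mean of 1-D points: a trivial nearest-centroid 'model'."""
    classes = {}
    for x, label in data:
        classes.setdefault(label, []).append(x)
    return {label: statistics.mean(xs) for label, xs in classes.items()}

def predict(centroids, x):
    """Classify x by whichever class centroid it is closest to."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: class "low" clusters near 0, class "high" near 10
clean = [(v, "low") for v in (0, 1, 2)] + [(v, "high") for v in (8, 9, 10)]

# Poisoned copy: attacker injects mislabeled points to drag the "low" centroid up
poisoned = clean + [(9, "low"), (10, "low"), (11, "low")]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(poisoned)

print(predict(clean_model, 7))     # high -- correct
print(predict(poisoned_model, 7))  # low  -- the poisoned model now misclassifies
```

Just three mislabeled points shift the “low” centroid from 1 to 5.5, which is enough to flip predictions near the decision boundary.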

Recommendations for Mitigation

  • Carefully examine and sanitize the training data used to build machine learning models. This involves identifying potential sources of malicious data and removing them from the dataset.
  • Implementing anomaly detection algorithms to detect unusual patterns or outliers in the training data can help to identify potential instances of data poisoning. This allows for early intervention before the model is deployed in production.
  • Creating models that are more robust to adversarial attacks can help to mitigate the effects of data poisoning. This can include techniques like adding noise to the training data, using ensembles of models, or incorporating adversarial training.
  • Regularly retraining machine learning models with updated and sanitized datasets can help to prevent data poisoning attacks. This can also help to improve the accuracy and performance of the model over time.
  • Incorporating human oversight into the machine learning process can help to catch potential instances of data poisoning that automated methods may miss. This includes manual inspection of training data, review of model outputs, and monitoring for unexpected changes in performance.
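As a concrete example of the anomaly-detection recommendation above, even a simple z-score filter can flag suspicious training points for manual review before training. A minimal sketch, with an illustrative threshold; real pipelines would use multivariate and model-aware detectors:

```python
import statistics

def filter_outliers(values, threshold=2.5):
    """Flag points whose z-score exceeds the threshold as suspect.

    Returns (kept, suspect) lists; suspect points warrant manual review
    before the data is used for training.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    kept, suspect = [], []
    for v in values:
        z = abs(v - mean) / stdev if stdev else 0.0
        (suspect if z > threshold else kept).append(v)
    return kept, suspect

values = [10, 11, 9, 10, 12, 10, 11, 95]  # 95 is an injected outlier
kept, suspect = filter_outliers(values)
print(suspect)  # [95]
```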

Risk 3: Model Inversion and Membership Inference Attacks

Model Inversion Attacks

Model inversion attacks attempt to reconstruct input data from model predictions, potentially revealing sensitive information about individual data points. The attack works by feeding the model a set of input data and then observing the model’s output. With this information, the attacker can infer the values of the input data that were used to generate the output.

For example, if a model is trained to classify images of cats and dogs, an attacker could use a model inversion attack to infer the values of the pixels in an image that were used to classify the image as a cat or a dog. This information can then be used to identify the objects in the image or to reconstruct the original image.

Model inversion attacks are a serious threat to the privacy of users of machine learning models. They can infer sensitive information about users, such as their medical history, financial information, or location. As a result, it is important to take steps to protect machine learning models from model inversion attacks.

Here is a great walk-thru of exactly how a model inversion attack works. The post demonstrates the approach given in a notebook found in the PySyft repository.

Membership Inference Attacks

Membership inference attacks determine whether a specific data point was part of the training set, which can expose private user information or leak intellectual property. The attack queries the model with a set of data samples, including both those that were used to train the model and those that were not. The attacker then observes the model’s output for each sample and uses this information to infer whether the sample was used to train the model.

For example, if a model is trained to classify images of cats and dogs, an attacker could use a membership inference attack to determine whether a particular image was used to train the model. The attacker would do this by querying the model with a set of images, including both cats and dogs, and observing the model’s output for each image. If the model’s output on an image is markedly more confident than its output on comparable images, the attacker can infer that the image was likely part of the training set.

Membership inference attacks are a serious threat to the privacy of users of machine learning models. They are leveraged to infer sensitive information about users, such as their medical history, financial information, or location. 
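The intuition behind these attacks, that overfit models behave more confidently on data they were trained on, can be sketched with a deliberately memorizing toy model. This is purely illustrative; real membership inference attacks use shadow models and statistical tests rather than a hard-coded confidence gap:

```python
def train_memorizing_model(training_set):
    """A deliberately overfit 'model': near-certain on training points,
    uncertain elsewhere (mimicking overfitting)."""
    seen = set(training_set)
    def confidence(x):
        return 0.99 if x in seen else 0.55
    return confidence

def membership_inference(model, sample, threshold=0.9):
    """Attacker guesses 'member' when the model is suspiciously confident."""
    return model(sample) > threshold

model = train_memorizing_model(["rec-001", "rec-002", "rec-003"])
print(membership_inference(model, "rec-002"))  # True  -- was in the training set
print(membership_inference(model, "rec-999"))  # False -- was not
```

The larger the confidence gap between training and non-training data, the easier the inference, which is why regularization and differential privacy (below) blunt the attack.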

Recommendations for Mitigation

  • Differential privacy is a technique that adds noise to the output of a machine learning model. This ensures that the attacker cannot infer any individual’s data from the output.
  • The training process for a machine learning model should be secure. This will prevent attackers from injecting malicious data into the training data.
  • Use a secure inference process. The inference process needs to be secure to prevent attackers from inferring sensitive information from the model’s output.
  • Design the model to prevent attackers from inferring sensitive information from the model’s parameters or structure.
  • Deploy the model in a secure environment to prevent attackers from accessing the model or its data.
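To make the differential-privacy recommendation concrete, the classic Laplace mechanism adds noise calibrated to a query’s sensitivity and a privacy budget epsilon. A minimal sketch, with illustrative parameter values; production systems would use a vetted DP library rather than hand-rolled sampling:

```python
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity/epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(42)
print(private_count(1000))  # a noisy count close to 1000; exact value depends on the draw
```

Because any single record can change the count by at most the sensitivity, the added noise masks whether any individual was in the data, which directly frustrates membership inference.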

The adoption of chatbots and other AI language models such as ChatGPT can greatly enhance business processes and customer experiences, but it also comes with new risks and challenges. One major risk is the potential for data leakage and privacy violations which, as discussed, can compromise the security and accuracy of AI models. Another is data poisoning, where malicious actors intentionally corrupt an AI model’s training data, ultimately leading to flawed decision-making and security breaches. Finally, model inversion and membership inference attacks can reveal sensitive information about users.

To mitigate these risks, businesses should implement access controls, use modern and secure data encryption techniques, establish sound data handling procedures, monitor and test their models regularly, and incorporate human oversight into the machine learning process. Using differential privacy and a secure deployment environment can further protect machine learning models from these threats. It is crucial that businesses stay vigilant and proactive as they continue to adopt and integrate AI technologies into their operations.

NLP Query to SQL Query with GPT: Data Extraction for Businesses
The Official Blog of Adam DiStefano, M.S., CISSP | April 17, 2023

Have you ever struggled with extracting useful information from a large database? Maybe you wanted to find out how many customers bought a certain product last month, or what the total revenue was for a specific time period. It can be a daunting task to manually search through all the data and compile the results. Fortunately, with recent advancements in natural language processing (NLP), machines can now understand and respond to human language, making it easier than ever to query databases using natural language commands. This is where ChatGPT comes in. In this post, we will build a proof-of-concept application that converts an NLP query into a SQL query using OpenAI’s GPT models.

What is Natural Language Processing (NLP)?

Natural Language Processing, or NLP, is a branch of artificial intelligence that focuses on enabling machines to understand and interact with human language. In simpler terms, NLP is the ability of machines to read, understand, and generate human language. Through a combination of algorithms, machine learning, and linguistics, NLP allows machines to process and analyze vast amounts of natural language data, such as text, speech, and even gestures, and convert it into structured data that can be used for analysis and decision-making. For example, a machine using NLP might analyze a text message and identify the sentiment behind it, such as whether the message is positive, negative, or neutral. Or it might identify key topics or entities mentioned in the message, such as people, places, or products.

How Does NLP Work?

NLP uses a combination of algorithms, statistical models, and machine learning to analyze and understand human language. Below are the basic steps involved in the NLP process:

  1. Tokenization: The first step in NLP is to tokenize the data: the text or speech is broken down into individual units, or tokens, such as words, phrases, or sentences.
  2. Parsing: This process involves analyzing the grammatical structure of the text to identify the relationships between the tokens. This helps the machine understand the meaning of the text.
  3. Named entity recognition: NER is the process of identifying and classifying named entities in text, such as people, places, and organizations. This helps the machine understand the context of the text and the relationships between different entities.
  4. Sentiment analysis: Sentiment analysis involves determining the overall sentiment or emotional tone of a piece of text, such as whether it is positive, negative, or neutral. Many social media companies leverage this for monitoring, customer feedback analysis, and other applications.
  5. Machine learning: NLP algorithms are trained using machine learning techniques to improve their accuracy and performance over time. By analyzing large amounts of human language data, the machine can learn to recognize patterns and make predictions about new text it encounters.
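To make the first step concrete, a toy word-level tokenizer can be written in a few lines. Real language models use far more sophisticated subword tokenizers, so treat this only as an illustration of the idea:

```python
import re

def tokenize(text):
    """Split text into lowercase word and punctuation tokens -- a toy tokenizer."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenize("ChatGPT converts natural language to SQL!"))
# ['chatgpt', 'converts', 'natural', 'language', 'to', 'sql', '!']
```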

What is ChatGPT?

ChatGPT is a powerful language model based on the GPT-3.5 architecture that can generate human-like responses to natural language queries. This means that you can interact with ChatGPT in the same way you would with a human, using plain language to ask questions or give commands. But instead of relying on intuition and experience to retrieve data, ChatGPT uses its NLP capabilities to translate your natural language query into a structured query language (SQL) that can then be used to extract data from a database.
So how does this work? Let’s say you have a database of customer orders, and you want to find out how many orders were placed in the month of March. You could ask ChatGPT something like “How many orders were placed in March?” ChatGPT would then use its NLP capabilities to understand the intent of your query, and translate it into a SQL query that would retrieve the relevant data from the database. The resulting SQL query might look something like this:
SELECT COUNT(*) FROM orders WHERE order_date >= '2022-03-01' AND order_date < '2022-04-01';

This SQL query retrieves the number of rows (orders) where the order date falls within the month of March and returns the count of those rows. Executives who want these results have traditionally relied on skilled database administrators to craft the desired query. These DBAs then need to validate that the data meets the needs and requirements that were requested. This is a time-consuming process, as real requests can be much more complex than the example above.

Benefits of Leveraging ChatGPT

Using ChatGPT to extract insights from databases can provide numerous benefits to businesses. Here are some of the key advantages:

  1. Faster decision-making: By using ChatGPT to quickly and easily retrieve data from databases, businesses can make more informed decisions in less time. This improved velocity is especially valuable in fast-paced industries where decisions need to be made quickly.
  2. Increased efficiency: ChatGPT’s ability to extract data from databases means that employees can spend less time manually searching for and compiling data, and more time analyzing and acting on the insights generated from that data. This can lead to increased productivity and efficiency.
  3. Better insights: ChatGPT helps businesses uncover insights that may have been overlooked or difficult to find using traditional data analysis methods. Leveraging NLP to generate natural language queries, ChatGPT helps users explore data in new ways and uncover insights that may have been hidden.
  4. Improved collaboration: Because ChatGPT can be used by anyone in the organization, regardless of their technical expertise, it can help foster collaboration and communication across departments. This can help break down silos and promote a culture of data-driven decision-making throughout the organization.
  5. Easy-to-understand data: ChatGPT can help executives easily access and understand data in a way that is intuitive and natural. This enables the use of plain language to ask questions or give commands, and ChatGPT will generate SQL queries that extract the relevant data from the database. This means that executives can quickly access the information they need without having to rely on technical jargon or complex reports.

Building a NLP Query to SQL Query GPT Application

Before we get started, it is important to note that this is simply a proof-of-concept application. We will build a simple application that converts a natural language query into a SQL query to extract sales data from a SQL database. Because it is a proof of concept, we will use an in-memory SQL database; in production, you would connect directly to the enterprise database.

This project can be found on my GitHub.

The first step in developing this application is to ensure you have an API key from OpenAI.

Obtaining an API Key from OpenAi

To get a developer API key from OpenAI, you need to sign up for an API account on the OpenAI website. Here’s a step-by-step guide to help you with that process:

  1. Visit the OpenAI website
  2. Click on the “Sign up” button in the top-right corner of the page to create an account. If you already have an account, click on “Log in” instead.
  3. Once you’ve signed up or logged in, visit the OpenAI API portal
  4. Fill in the required details and sign up for the API. If you’re already logged in, the signup process might be quicker.
  5. After signing up, you’ll get access to the OpenAI API dashboard. You may need to wait for an email confirmation or approval before you can use the API.
  6. Once you have access to the API dashboard, navigate to the “API Keys” tab
  7. Click on “Create new API key” to generate a new API key. You can also see any existing keys you have on this page.

IMPORTANT: Make sure you keep your API key secure, as it is a sensitive piece of information that can be used to access your account and make requests on your behalf. Don’t share it publicly or include it in your code directly. Store it in a separate file or use environment variables to keep it secure.
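One common way to follow this advice is to read the key from an environment variable instead of hard-coding it. A minimal sketch; `OPENAI_API_KEY` is a conventional variable name used here for illustration, not a requirement:

```python
import os

def load_api_key(env_var="OPENAI_API_KEY"):
    """Fetch the API key from the environment; fail loudly if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set. Export it in your shell before running, "
            f"e.g. export {env_var}=sk-..."
        )
    return key

# openai.api_key = load_api_key()  # assign once at startup
```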

Step 1: Development Environment

This project was created using Jupyter notebook. You can install Jupyter locally as a standalone program on your device. To learn how to install Jupyter, visit their website here. Jupyter also comes installed on Anaconda and you can use the notebook there. To learn more about Anaconda, visit their documentation here. Lastly, you can use Google Colab to develop. Google Colab, short for Google Colaboratory, is a free, cloud-based Jupyter Notebook environment provided by Google. It allows users to write, execute, and share code in Python and other supported languages, all within a web browser. You can start using Google Colab by visiting here.

Note: You must have a Google account to use this service.

Step 2: Importing Your Libraries

For this project, the following Python libraries were used:

  • OpenAi (see the documentation here)
  • OS (see the documentation here)
  • Pandas (see documentation here)
  • SQLAlchemy (see documentation here)

#Import Libraries
import openai
import os
import pandas as pd
import sqlalchemy

#Import these libraries to setup a temp DB in RAM and PUSH Pandas DF to DB
from sqlalchemy import create_engine
from sqlalchemy import text

Step 3: Connecting Your API Key to OpenAi

For this project, I created a text file to pass my API key, to avoid hard-coding the key into my code. We could have set it up as an environment variable, but we would need to associate the key each time we begin a new session, which is not ideal. Note that the text file must be in the same directory as the notebook for this method to work.

#Pass api.txt file
with open('api.txt', 'r') as f:
    openai.api_key = f.read().strip()

Step 4: Evaluate the Data

Next, we will use the pandas library to evaluate the data. We start by creating a dataframe from the dataset and reviewing the first five rows.

#Read in data
df = pd.read_csv("sales_data_sample.csv")

#Review data
df.head()

Step 5: Create the In-Memory SQLite Database

This code snippet creates a SQLAlchemy engine that connects to an in-memory SQLite database. Here’s a breakdown of each part:

  1. create_engine: This is a function from SQLAlchemy that creates an engine object, which establishes a connection to a specific database.
  2. 'sqlite:///:memory:': This is a connection string that specifies the database type (SQLite) and its location. The :memory: token tells SQLite to create the database in RAM rather than on disk. (Note that 'sqlite:///memory:' without the leading colon would instead create a file named memory: on disk.)
  3. echo=True: This is an optional argument that, when set to True, enables logging of generated SQL statements to the console. It can be helpful for debugging purposes.

#Create temp DB (':memory:' keeps the SQLite database in RAM)
temp_db = create_engine('sqlite:///:memory:', echo=True)

Step 6: Pushing the Dataframe to the Database Created Above

In this step, we will use the to_sql method from the pandas library to push the contents of a DataFrame (df) to a new SQL table in the connected database.

#Push the DF to be in SQL DB (the table name must match the "Sales" table queried below)
data = df.to_sql(name="Sales", con=temp_db)

Step 7: Connecting to the Database

This code snippet connects to the database using the SQLAlchemy engine (temp_db) and executes a SQL query to get the sum of the SALES column from the Sales table. We will also review the output. Here’s a breakdown of the code:

  1. with temp_db.connect() as conn:: This creates a context manager that connects to the database using the temp_db engine. It assigns the connection to the variable conn. The connection will be automatically closed when the with block ends.
  2. results = conn.execute(text("SELECT SUM(SALES) FROM Sales")): This line executes a SQL query using the conn.execute() method. The text() function is used to wrap the raw SQL query string, which is "SELECT SUM(SALES) FROM Sales". The query calculates the sum of the SALES column from the Sales table. The result of the query is stored in the results variable.

#Connect to SQL DB
with temp_db.connect() as conn:
    results = conn.execute(text("SELECT SUM(SALES) FROM Sales"))

#Return Results
results.all()

Step 8: Create the Handler Functions for GPT-3 to Understand the Table Structure

This code snippet defines a Python function called create_table_definition that takes a pandas DataFrame (df) as input and returns a string containing a formatted comment about an SQLite SQL table named Sales with its columns.

#Create a function for table definitions
def create_table_definition(df):
    prompt = """### sqlite SQL table, with its properties:
    #
    # Sales({})
    #
    """.format(",".join(str(col) for col in df.columns))
    
    return prompt

To review the output:

#Review results
print(create_table_definition(df))

Step 9: Create the Prompt Function for NLP

#Prompt Function
def prompt_input():
    nlp_text = input("Enter desired information: ")
    return nlp_text

#Validate function
prompt_input()

Step 10: Combining the Functions

This function defines a Python function called combined that takes a pandas DataFrame (df) and a string (query_prompt) as input and returns a combined string containing a formatted comment about the SQLite SQL table and a query prompt.

#Combine these functions into a single function
def combined(df, query_prompt):
    definition = create_table_definition(df)
    query_init_string = f"###A query to answer: {query_prompt}\nSELECT"
    return definition + query_init_string

Here, we grab the NLP input and insert the table definitions.:

#Grabbing natural language
nlp_text = prompt_input()

#Inserting table definition (DF + query that does... + NLP)
prompt = combined(df, nlp_text)

Step 11: Generating the Response from the GPT-3 Language Model

This code snippet calls the openai.Completion.create() method from the OpenAI API to generate a response using the GPT-3 language model. The specific model used here is ‘text-davinci-002’. The prompt for the model is generated using the combined(df, nlp_text) function, which combines a comment describing the SQLite SQL table (based on the DataFrame df) and a comment describing the SQL query to be written. Here’s a breakdown of the method parameters:
  1. model='text-davinci-002': Specifies the GPT-3 model to be used for generating the response, in this case, ‘text-davinci-002’.
  2. prompt=combined(df, nlp_text): The prompt for the model is generated by calling the combined() function with the DataFrame df and the string nlp_text as inputs.
  3. temperature=0: Controls the randomness of the model’s output. A value of 0 makes the output deterministic, selecting the most likely token at each step.
  4. max_tokens=150: Limits the maximum number of tokens (words or word pieces) in the generated response to 150.
  5. top_p=1.0: Controls nucleus sampling, which restricts sampling to the smallest set of top tokens whose cumulative probability exceeds the specified value. A value of 1.0 includes all tokens, applying no truncation; since temperature is set to 0 here, decoding is deterministic regardless.
  6. frequency_penalty=0: Controls the penalty applied based on token frequency. A value of 0 means no penalty is applied.
  7. presence_penalty=0: Controls the penalty applied based on token presence in the input. A value of 0 means no penalty is applied.
  8. stop=["#", ";"]: Specifies a list of tokens that, if encountered by the model, will cause the generation to stop. In this case, the generation will stop when it encounters a “#” or “;”.

The openai.Completion.create() method returns a response object, which is stored in the response variable. The generated text can be extracted from this object using response.choices[0].text.

#Generate GPT Response
response = openai.Completion.create(
            model = 'text-davinci-002',
            prompt = combined(df, nlp_text),
            temperature = 0,
            max_tokens = 150,
            top_p = 1.0,
            frequency_penalty = 0,
            presence_penalty = 0,
            stop = ["#", ";"]
)

Step 12: Format the Response

Finally, we write a function to format the response from the GPT application:

#Format response
def handle_response(response):
    query = response['choices'][0]['text']
    #The prompt ends with "SELECT", so prepend it to make the query complete
    if query.startswith(" "):
        query = 'SELECT' + query
    return query

Running the following snippet will return the desired NLP query to SQL query input:

#Get response
handle_response(response)

Your output should now look something like this:

"SELECT * FROM Sales WHERE STATUS = 'Shipped' AND YEAR_ID = 2003 AND QTR_ID = 3"
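Because the generated SQL is ultimately executed against a live database, it is prudent to validate it before execution. Below is a minimal guardrail sketch; a production system would use a real SQL parser and least-privilege database credentials rather than keyword matching:

```python
import re

# Keywords that indicate a write or schema-changing statement (illustrative list)
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|ATTACH|PRAGMA)\b", re.IGNORECASE
)

def is_safe_select(query: str) -> bool:
    """Allow only single-statement, read-only SELECT queries."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:  # more than one statement
        return False
    if not stripped.upper().startswith("SELECT"):
        return False
    return not FORBIDDEN.search(stripped)

print(is_safe_select("SELECT SUM(SALES) FROM Sales"))  # True
print(is_safe_select("DROP TABLE Sales"))              # False
print(is_safe_select("SELECT 1; DROP TABLE Sales"))    # False
```

A check like this would sit between `handle_response` and the `conn.execute` call, rejecting any model output that is not a plain SELECT.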

In this post, we demonstrated a very simple way to convert an NLP query into a SQL query using an in-memory SQL database. This was a proof of concept. In future posts, we will expand this application toward more enterprise-ready use cases, such as incorporating it into Power BI and connecting to a production database, which is more reflective of a real-world application.
