Test Script Generation Using Machine Learning and ChatGPT API

This is an enhancement of Lesson One: the machine learning script from that lesson is improved with the LLM features of ChatGPT.

from openai import OpenAI
import os
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Set your OpenAI API key (replace the placeholder below, or set the
# OPENAI_API_KEY environment variable before running the script)
os.environ['OPENAI_API_KEY'] = 'your-api-key-here'
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

# Example test data for the ML model
data = {
    'user_name': [1, 1, 0, 0, 1, 0, 0, 1],  # 1: Valid, 0: Invalid
    'email': [1, 0, 1, 1, 0, 1, 0, 1],     # 1: Valid, 0: Invalid
    'address': [1, 1, 1, 0, 1, 1, 0, 0],   # 1: Valid, 0: Invalid
    'book_id': [123, 124, 125, 126, 123, 127, 125, 128],  # Example book IDs
    'quantity': [1, 2, 1, 1, 0, 2, 1, 3],  # Quantity of books ordered
    'payment_valid': [1, 1, 0, 0, 1, 0, 1, 1],  # 1: Valid payment, 0: Invalid
    'order_outcome': [1, 1, 0, 0, 0, 0, 1, 1]  # 1: Pass, 0: Fail
}

# Create a DataFrame
df = pd.DataFrame(data)

# Features and Labels (book_id is just an identifier, so it is not used as a feature)
X = df[['user_name', 'email', 'address', 'payment_valid', 'quantity']]  # Features
y = df['order_outcome']  # Label

# Split the dataset into training and testing sets (80% training, 20% testing)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize and train the Decision Tree Classifier
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

# Now we will use the model to generate test cases for new orders
new_orders = pd.DataFrame({
    'user_name': [1, 0],
    'email': [1, 1],
    'address': [1, 0],
    'payment_valid': [1, 0],
    'quantity': [2, 1]
})

# Predict outcomes for the new orders
predicted_outcomes = clf.predict(new_orders)

# Generate the natural language test case descriptions using ChatGPT
for i, row in new_orders.iterrows():
    outcome = 'Pass' if predicted_outcomes[i] == 1 else 'Fail'

    test_case_description = f"Test case {i+1}: Test a scenario where user_name is {row['user_name']}, " \
                            f"email is {row['email']}, address is {row['address']}, payment_valid is {row['payment_valid']}, " \
                            f"and quantity ordered is {row['quantity']}. The expected outcome should be {outcome}."

    # Use the new ChatCompletion API to generate detailed test cases
    response = client.chat.completions.create(
        model="gpt-4",  # You can use "gpt-4" if available
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": f"Given the following test case scenario: {test_case_description}, generate a detailed test script."}
        ]
    )

    # Access the generated script from the response (attribute/dot notation)
    print(f"Test Case {i+1}:")
    print(response.choices[0].message.content.strip())
    print("\n")

Steps Explained:

1. Dataset Preparation:

First, prepare a dataset like the example above. It contains features such as user_name, email, address, and payment_valid, plus the expected outcome of each test (order_outcome).
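
In practice you would load real historical test results instead of hard-coding a dictionary. A minimal sketch, assuming a hypothetical historical_orders.csv file with the same columns as the dictionary above:

import pandas as pd

# Load historical test results (hypothetical file name)
df = pd.read_csv('historical_orders.csv')

# Sanity-check that the expected columns are present
expected_columns = {'user_name', 'email', 'address', 'book_id',
                    'quantity', 'payment_valid', 'order_outcome'}
assert expected_columns.issubset(df.columns), 'CSV is missing expected columns'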

2. Machine Learning Model:

The Decision Tree classifier is trained on this dataset to predict the outcome of new test inputs. By learning the relationships between the input features and the test outcomes, the model can generalize and make predictions for unseen test cases.
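
With only eight rows, the 20% hold-out split leaves just two test samples, but you can still sanity-check the trained model. A minimal sketch, reusing clf, X_test, and y_test from the script above:

from sklearn.metrics import accuracy_score

# Evaluate the trained classifier on the held-out 20% split
y_pred = clf.predict(X_test)
print(f"Hold-out accuracy: {accuracy_score(y_test, y_pred):.2f}")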

3. Generating New Test Inputs:

After training the model, you can input new test cases (represented by new_orders). The model will predict outcomes (pass or fail) for these inputs, giving you an idea of what scenarios should pass or fail based on historical data.
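
If you also want a sense of how confident the model is in each prediction, DecisionTreeClassifier exposes predict_proba. A short sketch, reusing clf and new_orders from the script above (note that a single decision tree often returns hard 0/1 probabilities):

# Class probabilities per new order; clf.classes_ gives the column order
probabilities = clf.predict_proba(new_orders)
print(clf.classes_)  # e.g. [0 1] -> column 0 is Fail, column 1 is Pass
for i, probs in enumerate(probabilities):
    print(f"Order {i + 1}: P(Fail)={probs[0]:.2f}, P(Pass)={probs[1]:.2f}")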

4. Using ChatGPT for Script Generation:

The predicted outcomes from the ML model are then passed to the chat model (gpt-4 in the code above) via the OpenAI API. In this step, you provide a description of the inputs and expected results, and ChatGPT generates a detailed natural-language test script, elaborating on the steps needed to execute the test case so it is easy to follow and understand.
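
To make this step reusable, you can wrap the API call in a small helper and save each generated script to disk. A sketch, reusing client, new_orders, and predicted_outcomes from the script above (the helper name and file names are illustrative, not required):

def generate_test_script(description, model="gpt-4"):
    """Ask the chat model to turn a scenario description into a test script."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": f"Given the following test case scenario: "
                                        f"{description}, generate a detailed test script."}
        ]
    )
    return response.choices[0].message.content.strip()

# Save each generated script to its own text file for testers to pick up
for i, row in new_orders.iterrows():
    outcome = 'Pass' if predicted_outcomes[i] == 1 else 'Fail'
    description = (f"user_name is {row['user_name']}, email is {row['email']}, "
                   f"address is {row['address']}, payment_valid is {row['payment_valid']}, "
                   f"and quantity ordered is {row['quantity']}. "
                   f"The expected outcome should be {outcome}.")
    with open(f"test_case_{i + 1}.txt", "w") as f:
        f.write(generate_test_script(description))

Running the original loop (or this helper) produces output like the two sample test cases below.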


Test Case 1:
Test Script:

Title: Testing user information and order quantity 

Test Case ID: 001

1. Test Objective: 
To validate the functionality of the system when the parameters user_name, email, address, and payment_valid are all set to valid (1) and the quantity ordered is greater than 1.

2. Precondition: 
Make sure the system is available and the user is able to enter their details. 

3. Test Steps: 
3.1 Open the system/user interface.
3.2 Input user_name as 1 (assumption: 1 = valid).
3.3 Input email as 1 (assumption: 1 = valid).
3.4 Input address as 1 (assumption: 1 = valid).
3.5 Set payment_valid as 1 (assumption: 1 = valid).
3.6 Set quantity ordered as 2.
3.7 Click the submit button / confirm order option, or use any other method that submits this information to the system.

4. Expected Result:
4.1 The system should confirm the user's information is valid.
4.2 It should accept the quantity ordered as 2.
4.3 The application should not show any error warnings.
4.4 The order should be successfully placed and processed.

5. Actual Result: 
Record what occurred when you ran the test.

6. Post-Condition: 
Check that the quantity of items in the inventory is reduced by 2.
Ensure that a confirmation email/notification has been sent to the user.
Check that the payment has been deducted from the user's account.

7. Status (Pass/Fail): 
If Expected Result and Actual Result match, the test case is a Pass.

8. Notes/Issues: 
Record any additional comments or problems you encountered during testing. 

End of test script.


Test Case 2:
Test Script Title: Verification of User Information and Order Validation

Test Script ID: TS002

Preconditions:
1. The user has access to the ordering platform.
2. The fields for user_name, email, address, payment_valid and quantity are available for input.
3. User fills in the details.

STEP 1:
- Description : Input data in the field 'user_name.' 

- Action : Leave the 'user_name' field empty or input '0'.
 
- Expected Result : System should display an error highlighting that the 'user_name' cannot be '0'.

STEP 2:
- Description : Input data in the field 'email.'

- Action : Type '1' in the 'email' field.

- Expected Result : System should validate and accept the given input.

STEP 3:
- Description : Input data in the 'address' field.

- Action : Leave the 'address' field empty or input '0'.

- Expected Result : System should display an error highlighting that the 'address' cannot be '0'.

STEP 4:
- Description : Validate the payment.

- Action : Leave the 'payment_valid' field unchecked or set to '0'.

- Expected Result : System should prompt an error message stating that the 'payment_valid' must be checked before proceeding.

STEP 5:
- Description : Input data in the 'quantity ordered' field.

- Action : Type '1' in the 'quantity ordered' field.

- Expected Result : System should validate and accept the 'quantity ordered' input.

Post Condition (Final Test):
- Description : Submit the order.

- Action : Click the 'Submit' or 'Order' button.

- Expected Result : The system should display a fail message or error due to the errors in the 'user_name', 'address', and 'payment_valid' fields.

Remarks: This test case is designed to verify the order validation process when a user inputs inappropriate values '0' for 'user_name', 'address' and in 'payment_valid' field while using the ordering platform. As these fields are critical for order completion, the system should not process the order and must display a fail message or error.

As you can see, you can improve the LLM prompt to make sure you get a well-organized set of test cases and steps:

messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": f"Given the following test case scenario: {test_case_description}, generate a detailed test script."}
]

An example of how you can improve it: instruct the model that the outline should include "Test Case" followed by a number (for example, Test Case 1), that steps should be organized as Step 1, Step 2, etc., and that each test step should have a direct expected result.
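
Putting that advice into code, here is one way to tighten the system message (the exact wording below is only an example):

messages=[
    {"role": "system", "content": (
        "You are a QA engineer writing test scripts. Format your answer as "
        "follows: start with 'Test Case' followed by a number (e.g. Test Case 1), "
        "organize the steps as Step 1, Step 2, etc., and give every test step "
        "a direct expected result."
    )},
    {"role": "user", "content": f"Given the following test case scenario: {test_case_description}, generate a detailed test script."}
]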

** TRY IT YOURSELF ** Google Colab Link

Advantages of this Approach:

1. Automated Test Generation:

The ML model predicts the pass/fail outcome, and ChatGPT generates detailed test steps based on those predictions. This reduces the manual effort required to write test cases, especially for repetitive or similar scenarios.

2. Scalability:

As you train the model on more data, it makes increasingly accurate predictions for different types of inputs, letting your testing process adapt and scale with the complexity of the application.

3. Natural Language Generation:

ChatGPT helps you generate human-readable and easy-to-follow test scripts. This makes it easier for testers and developers to understand and execute the generated test cases without ambiguity.