Importing Test Cases from Xray and Spreadsheets into GPT Driver
- Christian Hartung
- Sep 25
- 13 min read
Migrating existing test cases from a tool like Jira Xray or a spreadsheet into an automation platform is a common challenge. Teams often have hundreds of manually written test cases in Xray or Excel/CSV files, and rewriting them for a new framework can be daunting. This guide explains why importing test cases is critical, examines traditional migration methods (and their pitfalls), and shows how GPT Driver streamlines the process with native import tools. We’ll also walk through an example of importing tests from a spreadsheet and running them on real devices via GPT Driver.
Why Manual Test Case Migration Is Painful
Test case imports are often tedious and block automation progress. In many organizations, test cases start off in test management tools (like Xray) or simple spreadsheets, written in plain language. Converting these into automated tests for frameworks like Appium or XCUITest traditionally means manual re-authoring. QA engineers must translate each test step into code or scripts, which is slow and error-prone, delaying automation adoption. It’s not just the time—manual conversion can introduce inconsistencies between the source test case and the automated version, especially if steps are misinterpreted or skipped. In fact, manual test procedures maintained in siloed documents often end up duplicated or out-of-sync over time, leading to “inefficient reuse and wasted effort”.
Several factors make this migration difficult:
Tooling Fragmentation: Manual test cases in Xray (Jira) or Excel follow a different format from code-based tests. There’s no one-click “export to Appium” – testers end up copy-pasting and translating steps.
Format Mismatches: Xray stores tests as Jira issues with fields (pre-conditions, steps, expected results), while spreadsheets use simple text columns. Automation frameworks require structured code or keywords, so a direct import isn’t straightforward without mapping each field; the sketch after this list makes the gap concrete.
Manual Data Entry: Without specialized import tools, teams resort to retyping or scripting conversions. This not only wastes time but risks human error (typos, missing steps). One tester lamented how Xray’s own importer demands a rigid CSV/XML template for each test case, making them “sit and write an Excel file for every story” – a clear productivity drag.
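To make that mismatch concrete, here is a minimal sketch in Python. The field names are illustrative approximations, not Xray’s actual export schema; the point is that none of these plain-language fields translates directly into executable code.

```python
# Illustrative shape of a manual test as it lives in Xray/Jira.
# Field names are approximations, not Xray's real export schema.
manual_test = {
    "key": "APP-101",
    "summary": "Verify login functionality",
    "precondition": "User account exists",
    "steps": [
        {"action": "Tap the Login button", "expected": "Login form is shown"},
        {"action": "Enter valid credentials", "expected": "Home screen is shown"},
    ],
}

# What an automation framework needs instead is executable code, e.g. an
# Appium-style test in which every step carries a locator and an API call:
#
#     def test_login(driver):
#         driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
#         ...
#
# Nothing in `manual_test` says which locator "the Login button" refers to.
# Filling that gap, step by step, is what manual migration amounts to.
```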
Traditional Approaches to Migrating Tests
Teams have tried various methods to move tests from management tools or spreadsheets into automation. Each has pros and cons:
Hand Re-Coding Tests: The most straightforward (if painful) method is assigning engineers to manually rewrite each test case as an automated script. This ensures the new tests are tailor-made for the framework and lets you refine steps during coding. However, it’s extremely time-consuming and doesn’t scale. It’s easy to make mistakes or skip details, and there’s a risk of divergence where the manual test documentation no longer matches the automated test behavior. Valuable QA time gets spent on rote translation instead of actual testing.
Custom Scripts or Conversion Tools: Some teams invest in one-off scripts to parse existing test cases (e.g. reading a CSV export of Xray or an Excel file) and generate skeleton test code. This can jump-start the process by automating the conversion of test steps into code statements or data tables. The downside? These scripts have to handle a lot of variability in natural-language steps. Writing a reliable parser is complex, often requiring maintenance whenever the input format changes. Additionally, the generated code usually needs further editing and debugging. In short, you trade initial manual work for upfront scripting effort – helpful for large suites, but not trivial to implement (a minimal sketch of such a converter appears after this list).
Plugins and Integrations: Another approach is to use existing integrations between test management and automation tools. For example, some test management systems offer APIs to pull test cases or push results. Xray itself provides a CSV Importer to bring manual tests into Jira, and other tools like TestRail or Zephyr have APIs that automation frameworks can leverage. A few AI-driven testing platforms (e.g. testRigor) even encourage copy-pasting test steps in plain English and letting the tool interpret them. The pro here is potential reduction in manual copy-paste; the con is that many integrations only cover results or require strict formatting. Often you still need to adjust test steps to match what the automation expects (e.g. ensuring identifiers or element names are provided). In practice, robust two-way sync between tools is rare, so teams end up maintaining tests in two places or doing a lot of prep work.
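For the custom-script route, a converter usually looks something like the sketch below: it reads a CSV export and emits pytest-style skeletons with the original steps left as TODO comments. The column names and step format are assumptions about one particular export – exactly the kind of detail that breaks when the format changes.

```python
import csv
import re

def generate_skeletons(csv_path: str) -> str:
    """Turn a CSV export of manual tests into pytest-style stubs.

    Assumes "Title" and "Steps" columns with numbered steps in one cell;
    real Xray/Excel exports vary, which is why such scripts need upkeep.
    """
    stubs = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = re.sub(r"\W+", "_", row["Title"].strip().lower())
            # Split "1. Launch the app 2. Tap Login" into individual steps.
            # Brittle by design: free-text steps containing digits can mis-split.
            steps = [s.strip() for s in re.split(r"\d+\.\s*", row["Steps"]) if s.strip()]
            body = "\n".join(f"    # TODO: automate: {step}" for step in steps)
            stubs.append(f"def test_{name}(driver):\n{body}\n    pass\n")
    return "\n\n".join(stubs)

if __name__ == "__main__":
    print(generate_skeletons("xray_export.csv"))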
How GPT Driver Simplifies Test Case Import
GPT Driver takes a different, AI-assisted approach to importing test cases. It was built with the assumption that teams already have tests written somewhere, and provides built-in tools to pull those into its no-code/low-code environment. In fact, “Import your existing tests” is a core feature of GPT Driver. Here’s how it works and why it’s easier:
Native Import from Xray: GPT Driver can connect to Jira Xray to fetch existing test cases, so you don’t have to manually export or rebuild them. Typically this is done via API token – you point GPT Driver to your Xray test repository (or export a CSV from Xray) and it will ingest the test cases, including names, descriptions, and step details. This avoids the painful CSV template dance that Xray’s default importer requires. The import process is also idempotent, meaning if you run it again (say after updating some tests in Xray), GPT Driver will update the existing tests rather than create duplicates.
Spreadsheet (CSV/Excel) Import: For teams that manage test cases in Excel or Google Sheets, GPT Driver offers a direct uploader. You can drop in a .csv or Excel file and the platform will parse it into structured tests. Each row (or each test case entry) becomes a test in GPT Driver, and each step in the description becomes an actionable step. The importer is flexible about mapping columns – e.g. you can designate which column is the “Test Step” vs “Expected Result”. It’s designed to handle the common formats used in manual test docs, so you don’t need a perfect template. If your spreadsheet has a “Step” and “Expected” column, those can translate into a GPT Driver command and an assertion, respectively.
AI-Powered Step Mapping: A standout benefit of GPT Driver is how it uses natural language understanding to convert plain English steps into executable actions. Instead of requiring you to tag every step with a selector or function, GPT Driver’s AI interprets instructions like “Tap on the Login button” or “Verify the welcome message is displayed”. Under the hood, it maps these to the appropriate UI interactions or validations in the app. In GPT Driver’s model, each imported test step becomes a GPT Driver “Command” in the no-code editor. For example, a test case with 5 steps in Xray will turn into a GPT Driver test with 5 commands (plus any setup/teardown logic). This dramatically cuts down the manual effort of figuring out automation code for each step – the heavy lifting is handled by GPT Driver’s AI Agent.
Preserving Structure and Metadata: When importing, GPT Driver strives to keep your test suite organized. Test hierarchies or groupings are retained (an Xray test set or a spreadsheet section can become a folder of tests in GPT Driver). Fields like priority, tags, or components can be carried over as well, mapping into GPT Driver’s own tagging system for filtering and reporting. Even test preconditions are not lost – if a manual test had a “Precondition” defined (e.g. “User must be logged in”), GPT Driver can convert that into either a setup step or a dependency on another test. In fact, when importing from TestRail, GPT Driver automatically creates a dependency chain if multiple tests are linked as prerequisites. This means your workflow logic (like setup steps or shared initial steps) remains intact after import.
Seamless Device Cloud Integration: Once your test cases are imported, they’re immediately executable on real devices or emulators through GPT Driver. There’s no need to rewrite them in Appium or modify them for different platforms. GPT Driver can run the tests on iOS and Android device clouds by mapping the plain steps to actual UI events at runtime. You simply select the target environment (e.g. choose a BrowserStack or AWS Device Farm device from GPT Driver’s interface) and hit run. The ability to take a test case that lived in a spreadsheet and instantly execute it on a physical phone is a game-changer for speed. All the plumbing with Appium/WebDriver and device farms is handled by GPT Driver behind the scenes. Moreover, because the tests are now in a fully automated suite, you can integrate them into CI pipelines easily – for example, using GPT Driver’s API or CLI to trigger test runs as part of your build process. (When you import via the API, GPT Driver even returns the new test IDs so you can immediately call the execution endpoint on them.) In short, imported tests are CI-ready (a sketch of this import-then-run flow appears after this list).
Extend and Evolve Tests with AI Features: Importing your cases is not the end – it’s the beginning of making them smarter. GPT Driver allows you to extend these tests with AI-powered features that weren’t possible in a manual test case. For instance, you can add an “exploratory step” where the AI tries random inputs, or use GPT Driver’s Vision capabilities to handle unexpected pop-ups. Since the core logic of your original test is now automated, your team can focus on enhancing coverage (maybe adding more assertions, or data-driving the tests) rather than worrying about basic conversion. This preserves the investment you made in writing those tests originally, while upgrading them with automation and intelligence.
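To show the shape of that import-then-run flow (import returns test IDs, which feed the execution call), here is a hypothetical Python sketch. The base URL, endpoint paths, payload fields, and auth header are placeholders, not GPT Driver’s actual API – check the official API documentation for the real names.

```python
import requests

# Hypothetical import-then-run flow. BASE, endpoint paths, payload fields,
# and the auth header are placeholders, not GPT Driver's real API.
BASE = "https://api.example-gptdriver.test"
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

# 1. Import test cases (e.g. a CSV exported from Xray).
with open("SmokeTests.csv", "rb") as f:
    resp = requests.post(f"{BASE}/import", headers=HEADERS, files={"file": f})
resp.raise_for_status()
test_ids = resp.json()["testIds"]  # the import call returns the new test IDs

# 2. Immediately trigger a run of the imported tests on a cloud device.
run = requests.post(
    f"{BASE}/execute",
    headers=HEADERS,
    json={"testIds": test_ids, "device": "Pixel 7 - Android 13"},
)
run.raise_for_status()
print("Run started:", run.json())
```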
Best Practices for Importing Tests into GPT Driver
To get the most out of the import process, consider these practical recommendations:
Audit and Clean Your Test Cases First: Before importing, take a quick pass through your Xray or spreadsheet tests to ensure they’re well-structured. Consistent, clear steps will translate best. For example, make sure each step has a clear action and expected result. Remove any purely manual testing notes that don’t make sense in automation (like “ask the user if they felt the vibration” – such steps need redesign for automation or can be flagged to skip). If using Xray, you might export the tests to CSV to see their content in one view and clean up any odd formatting or placeholder text.
Organize by Suites or Tags: It’s wise to import in logical groups. For instance, export/import your “Smoke Tests” separately from “Full Regression” tests. GPT Driver lets you place imported tests into folders or suites that mirror the grouping in your source tool. Preserving this structure makes it easier to run subsets of tests later (e.g. running just the smoke suite on every pull request, and the full suite nightly). Use tags or naming conventions that existed in your spreadsheet or Xray (such as components or priority) to tag tests in GPT Driver for easy filtering.
Follow the Import Workflow: In GPT Driver’s Studio interface, navigate to the Import feature. You’ll typically have options like “Import from Xray” or “Import from CSV/Excel”. For Xray, have your Jira URL and API token ready (or use a CSV export if an API route isn’t available). For spreadsheets, upload the file or paste the content as prompted. GPT Driver will parse the input and show a preview of the test cases to import. Take advantage of this preview to spot-check that fields are mapping correctly (e.g. ensure the first step of each test looks right). Then proceed with the import. The system will create each test case in your GPT Driver project; you should see them appear with the same titles and steps as the source.
Review and Adjust Mappings if Needed: After import, open a couple of the new tests in GPT Driver’s editor. Verify that the steps make sense in context. Thanks to GPT Driver’s AI, many steps will already be tied to UI elements or actions. However, you might need to do minor tweaks:
For example, if your test said “Click the Submit button” and your app has multiple buttons labeled “Submit,” the importer might not know which one to target. GPT Driver might flag such steps for your input (e.g. asking you to select the correct element). Resolve these by using the built-in recorder or UI inspector to bind the step to the right element.
If certain preconditions weren’t explicitly documented as steps (e.g. “user must be logged in” was a note in Xray), you may need to add a login step or mark another test as a dependency. GPT Driver supports dependencies (one test running before another), so you can link a setup test to run before a batch of imported tests if needed.
Check assertions: a manual test’s expected result like “Expected: Home screen is shown” should have become an assertion step (e.g. Check that “Home” appears on screen). Make sure these got captured as validation commands and refine any that are too vague.
Connect to Device Clouds and Data: Ensure your GPT Driver environment is hooked up to the necessary infrastructure to run the tests. If you need to test on multiple devices, configure your device cloud credentials in GPT Driver (for example, add your BrowserStack or Sauce Labs integration). If your tests rely on specific test data (e.g. user accounts or configurations), set those up as well. GPT Driver allows parameterization and test data injection, so you can import test data CSVs or define variables. Doing this upfront means when you execute the imported tests, they won’t fail due to missing data or environment issues.
Run in CI/CD: Finally, incorporate your newly imported tests into your continuous integration pipeline. GPT Driver provides both a web interface and an API/SDK for triggering tests. You can use the test suite IDs or names to run groups of tests as part of your CI build. For example, you might add a step in your Jenkins or GitHub Actions pipeline to call GPT Driver’s API to run the “Smoke Tests” suite on every deployment. Because GPT Driver can output results in real time (and even push results back to test management tools like TestRail), your CI can treat these just like any other automated tests. Monitor the first few runs to catch any flaky steps (occasionally an imported test may need a longer wait or slight adjustment for automation). Over time, you’ll gain confidence that your once-manual tests are now reliably part of the automated pipeline.
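As a concrete illustration of that last point, a CI job can trigger a suite and gate the build on the outcome. As before, the endpoints and response fields below are placeholders for whatever GPT Driver’s actual API exposes:

```python
import sys
import time

import requests

# Hypothetical CI gate: trigger the "Smoke Tests" suite and block until it
# finishes. Endpoints and response fields are placeholders; adapt them to
# GPT Driver's actual API.
BASE = "https://api.example-gptdriver.test"
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

run = requests.post(f"{BASE}/suites/smoke-tests/run", headers=HEADERS)
run.raise_for_status()
run_id = run.json()["runId"]

state = "running"
while state not in ("passed", "failed"):
    time.sleep(15)  # poll while the device-cloud run is in progress
    state = requests.get(f"{BASE}/runs/{run_id}", headers=HEADERS).json()["state"]

print("Smoke suite:", state)
sys.exit(0 if state == "passed" else 1)  # a non-zero exit fails the CI step
```

In Jenkins or GitHub Actions this is just one script step in the pipeline, so the imported tests gate deployments like any other automated check.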
Example: Importing a Spreadsheet of Smoke Tests
Let’s walk through a concrete scenario. Suppose your team has an Excel spreadsheet called “Rideshare App Smoke Tests” containing a list of high-level test cases for a mobile app. Each test case has a title, a sequence of steps, and expected outcomes. Here’s how you could import and execute them in GPT Driver:
Prepare the Spreadsheet: Convert the Excel file to CSV format (if it’s not already). Ensure the first row has clear headers like “Test Case ID”, “Title”, “Steps”, “Expected Result” (GPT Driver’s importer can often auto-detect these). For example:
| Test Case ID | Title | Steps (one per line in the sheet) | Expected Result |
| --- | --- | --- | --- |
| ST-1 | Verify login functionality | 1. Launch the app 2. Enter valid username 3. Enter valid password 4. Tap “Login” | User is logged into the home screen |
| ST-2 | New user sign-up via email | 1. Launch the app 2. Tap “Sign Up” 3. Fill in details 4. Submit form | Account is created and user is on onboarding screen |
Make sure each step is on a new line within the “Steps” cell (or some consistent delimiter that the importer can split on).
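If Excel’s CSV export flattened those newlines, a small pre-processing script can restore them before upload. This is just a sketch; the column name “Steps” is an assumption about your sheet.

```python
import csv
import re

# Pre-processing sketch: put each numbered step in the "Steps" cell on its
# own line so the importer has a consistent delimiter to split on.
with open("SmokeTests_raw.csv", newline="", encoding="utf-8") as src, \
     open("SmokeTests.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Insert a line break before every "2.", "3.", ... step marker.
        row["Steps"] = re.sub(r"\s*(?=\d+\.)", "\n", row["Steps"]).strip()
        writer.writerow(row)
```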
Use GPT Driver’s Import Tool: In the GPT Driver Studio, navigate to the Import Tests section. Choose Spreadsheet/CSV Import. Upload the SmokeTests.csv file. GPT Driver will parse the file – you’ll likely see a preview listing “Verify login functionality” and “New user sign-up via email” as test titles, with the steps broken out. Confirm that the steps and expected results are recognized correctly (e.g. it might show 4 commands for the login test corresponding to the 4 steps).
Import and Create Tests: Proceed with the import. GPT Driver will create each test case in your project. Now, in your GPT Driver test library, you should see a “Verify login functionality” test. Opening it reveals a sequence of steps written in plain English (GPT Driver automatically transforms each line into a command). For instance, step 4 might appear as a command: Tap on “Login”. The expected result might appear as a verification command: Check that home screen is visible. These commands are powered by GPT Driver’s AI, meaning it knows to interpret “Launch the app” as instructions to open the application, “Enter valid username” as a text input action, and so on.
Map UI Elements (if needed): If the app was already connected to GPT Driver (say you have uploaded the app build and maybe done a test run before), the importer might have already linked generic actions like “Tap Login” to a specific UI element (using text or accessibility ID). If any step is ambiguous, GPT Driver will highlight it. You can use the visual editor to quickly map that step: for example, it might prompt you to run the app in a preview mode where you click the “Login” button to teach the AI which element corresponds to “Login”. This one-time mapping will apply to all tests that use Tap “Login” going forward. In our example, since multiple smoke tests likely start with “Launch the app”, GPT Driver would reuse that action across tests without you repeating the mapping.
Run the Tests on Devices: Now comes the fun part – execution. Select the imported smoke tests (you can run them individually or as a suite). Choose a device configuration, such as “Pixel 7 - Android 13” on a cloud provider, or an iPhone simulator, depending on your coverage needs. Hit Run. GPT Driver will automatically launch the app on that device and step through each imported test case using the commands. For the login test, you’ll see it open the app, fill in the credentials (you might have to specify a sample username/password if not provided – GPT Driver can handle test data via variables or a simple edit to the command, like Enter text “demo_user” into Username field), tap Login, and then verify the home screen. All of this happens without you writing a single line of code – the steps from your spreadsheet are driving the automation. If the home screen appears as expected, GPT Driver will mark the test passed. If something goes wrong (say the login button wasn’t found or the home screen didn’t appear), the test fails and GPT Driver provides a detailed log and screenshot at the failing step.
Integrate and Extend: With these smoke tests now running in GPT Driver, you can integrate the run into your CI pipeline (so it runs, for example, on every merge to the master branch). You can also take advantage of GPT Driver’s features to extend the tests – maybe add an AI-generated exploratory step after login to wander through the app for a few extra interactions, or parameterize the login test to try multiple accounts. The key point is that you didn’t have to start from scratch – the core flow came straight from your existing tests, imported in minutes rather than days.
Through this example, it’s clear how GPT Driver dramatically accelerates the onboarding of existing test suites. A process that would traditionally involve weeks of manual rewriting (and plenty of opportunities for error) is handled largely by the tool’s import engine and AI understanding of test steps.
Key Takeaways
Importing test cases from Xray or spreadsheets into GPT Driver enables teams to kickstart automation using their proven manual tests. Instead of wasting time and risking inconsistencies with hand conversion, GPT Driver’s import feature directly translates your test repository into executable AI-driven tests. This not only preserves the investment in your test documentation but also ensures that manual and automated test definitions stay aligned.
Traditional migration workflows required significant effort or custom tooling, often yielding brittle results. In contrast, GPT Driver provides a unified platform where test cases live alongside their automation logic. With features like natural language step mapping and integration with device clouds, it eliminates the gap between test case management and test execution. Teams can maintain one source of truth for tests and run them anywhere – locally, on real devices, or in CI/CD – with minimal friction.
In summary, importing your existing Xray or Excel test cases into GPT Driver can save enormous time and reduce errors. It tackles the industry challenge head-on: bridging the world of manual test case writing and the world of automated testing. By automating the migration, QA teams can focus on expanding coverage and improving test quality rather than retracing old steps. If you’re evaluating GPT Driver, a great next step is to try importing a small batch of your tests and seeing them run – it’s often a “wow” moment for teams to watch their formerly manual tests come to life. Embracing this capability helps ensure that your move to GPT Driver (or any modern automation framework) brings your whole test suite along, not just new tests. That way, you accelerate automation adoption with confidence and keep your QA process cohesive from day one.