
Using AI in Research

A student guide to using free(mium) AI tools in the research and writing process.

Tool Evaluation

Inspect the Tool Itself

  • Identify the purpose of the AI tool. What task was it designed to do?
  • Investigate information published by the creator of the tool. Are their training, testing, and validation methods publicly available?
  • Evaluate the tool’s usage and privacy policies. How will the creators of the tool use the data that you upload to their service? Will your data be used to train future tools? If you're uploading others' works to an AI tool, will their privacy be respected?
  • Has the University of Cincinnati reviewed the tool for security and privacy?

Investigate the Training Data

  • If available, assess the data used to train and test the AI tool. Is it relevant to the task the AI is designed to perform? Is the training data a comprehensive representation of the issue?
  • What bias might be introduced by the training data? Do the creators of the tool discuss bias in their own documentation? Do they specify how their training methods attempt to address and reduce bias in outputs?

Evaluate the Output

  • Check to see if a third party has evaluated the reliability of a tool. Creators of AI tools will often release information as marketing for their tool, which can obscure vital information that helps evaluate reliability. Third-party testing is important for ensuring a well-rounded evaluation.
  • Finally, evaluate the output yourself. Find more information on this below.

Adapted from Evaluating AI - Using and Evaluating AI Tools - LibGuides at California State University Dominguez Hills

Response Evaluation

To avoid hallucinations, incorrect information, and bias, you should always evaluate the AI output. Here are some questions you can ask yourself:

  • Does the tool explain its output? Does it cite sources or provide a justification for its decision?
    • Can claims be verified in reliable, credible sources that cover the same topic?
    • If the output is a citation, does the source truly exist? AI hallucinations happen!
    • If the AI provides real citations or links, how does the information in the links compare to what the AI said?
  • Could the content be missing any important information or points of view? Is there any inherent bias?
    • Consider information unavailable to the AI tool. When was the last time the AI updated its data source? If you’re using an AI to make decisions, does the decision made by the AI tool fit with all the information available to you?
  • If something seems off, ask yourself: Does the information follow a clear and logical structure? Does it provide relevant examples? Does it answer my question in a meaningful way? If an AI-generated response feels vague or overly generic, try rewording your prompt or breaking it down into smaller, more specific questions.

Source Evaluation: SIFT

The SIFT method is a way to evaluate information, but it can also be used to evaluate AI outputs. The overview video below was made by another library. If you have more questions about the SIFT method or evaluating sources, please reach out to UC Libraries.

University of Cincinnati Libraries

PO Box 210033 Cincinnati, Ohio 45221-0033

Phone: 513-556-1424




© 2021 University of Cincinnati