How OpenAI Convinced Me to Trust It (by the Socratic Method)

OpenAI is an artificial intelligence research laboratory most famous for its recent release of ChatGPT. This AI language model is making waves and attracting significant media attention to AI-based solutions across multiple industries. While the general reception has been positive, I was skeptical of its ability to report accurately and reliably in real-world applications. My skepticism stemmed mainly from previous experience with AI technology’s unfulfilled promise and from OpenAI’s relative immaturity in the market. In my earlier exposure to AI products, I found they tended to be too specialized for general application, and they left me underwhelmed.

I wanted to determine whether OpenAI could be used in planning for, or integrated within, our products. These applications are especially relevant to our product agency operations, where we continually draw on deep domain knowledge to educate customers about the nuances of geospatial and agricultural applications. Can OpenAI help us maximize customer value while minimizing the risk involved? To find out, I employed the Socratic method on OpenAI itself.

The Socratic method involves asking probing questions to elicit a deeper understanding of and insight into a topic. I focused on the various OpenAI example applications available, including ChatGPT. I started by asking, “What are you capable of?” This question was intentionally broad and open-ended, since I wanted to assess the overall interpretation and the response. Most importantly, I could verify the answer against the available documentation. The AI explained its ability to be trained on custom imported models, to analyze large data sets using statistical analysis and machine learning, and to focus or limit its answers through customization via the available API. I tried, but I could not find any inaccuracies in the information provided. This was a nice start, but it is not particularly significant, since the AI should be capable of reporting its own functionality and limitations accurately.
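That last capability, focusing or limiting answers via the API, is the one most relevant to our products, so it is worth a quick look at what it involves in practice. The sketch below assumes OpenAI’s Python SDK; the model name, system prompt, and question are my own illustrative choices, not anything the AI prescribed:

```python
# A minimal sketch of constraining answers through the API.
# Assumptions: the openai package is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        # The system message is where we focus or limit the answers:
        {
            "role": "system",
            "content": (
                "You answer only questions about geospatial and "
                "agricultural data. Politely decline anything else."
            ),
        },
        {"role": "user", "content": "What are you capable of?"},
    ],
    temperature=0.2,  # lower temperature keeps answers more predictable
)

print(response.choices[0].message.content)
```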

My questions quickly escalated to much more specific use cases, such as “If your model was trained using six months of oceanic measurements including information on weather, wave height, wave speed, wave direction, currents, wind speed, wind direction, barometric pressure, water temperature, salinity levels, and tide levels, could you predict what those same variables will be over the next several days?” We are all about environmental and geospatial data at Lofty.
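Before getting to the answer, it is worth sketching what actually testing that claim would involve, since this is the experiment I could not run without the data. What follows is a rough baseline of my own, not OpenAI’s method: fit a simple autoregressive model per variable on six months of hourly readings and roll it forward three days. The data here is synthetic, and the lone wave-height series stands in for the full list of measurements:

```python
# A hedged sketch of the baseline test the question implies: fit a
# per-variable autoregressive model on six months of hourly readings,
# then roll it forward a few days. Synthetic data stands in for real
# oceanic measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
hours = 24 * 30 * 6  # roughly six months of hourly readings

# Stand-in series: a tidal-style cycle (12.42 h period) plus noise,
# loosely resembling wave height in metres.
t = np.arange(hours)
wave_height = (1.5 + 0.5 * np.sin(2 * np.pi * t / 12.42)
               + 0.1 * rng.standard_normal(hours))

# Build lagged features: predict each hour from the previous 48 hours.
lags = 48
X = np.stack([wave_height[i : i + lags] for i in range(hours - lags)])
y = wave_height[lags:]
model = LinearRegression().fit(X, y)

# Roll the model forward 72 hours (three days) past the last
# observation, feeding each prediction back in as the next input.
window = list(wave_height[-lags:])
forecast = []
for _ in range(72):
    step = model.predict(np.array(window[-lags:]).reshape(1, -1))[0]
    forecast.append(step)
    window.append(step)

print("first five forecast hours:", np.round(forecast[:5], 2))
```

A real evaluation would repeat this for every variable listed, compare the forecasts against held-out measurements, and involve the data scientists mentioned below.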

The response was “yes.” Because I did not have six months’ worth of training data available to probe specific scenarios, I changed my approach. I narrowed in with questions such as “What oceanic conditions are important to track to determine the likelihood of incoming storms and safety when using smaller vessels that are unable to handle large waves or extreme conditions?” The answers I received were surprisingly robust and covered conditions I had not mentioned or defined, such as water color and wave period. The AI seemed to understand the broader context of my question and helped me focus my approach.

As my questioning progressed, I found that each answer provided by ChatGPT was one I already knew to be true or one I could find supporting information about. However, convincing me isn’t easy, so I continued to expand my questions to other topics such as crop health and weather predictions. Yes, we are nerds about agriculture and sustainability. Yes, we think about it even when we aren’t at work. Yes, we are the cool kids.

The responses that followed were similarly promising, and I was able to verify each through external resources. Despite my desire to be thoroughly unimpressed by this technology, I eventually had to yield to its potential. Through this cycle of asking questions and researching the answers, I was able to evaluate OpenAI’s ability to correctly interpret complex questions related to data prediction, and I found real value in the tool.

It is important to note that the platform is not infallible, and truly testing predictive models requires real-life data and real-life data scientists. Information provided by OpenAI in this way still requires verification. ChatGPT and other AI models are a support, not an authority, and they can fall short. If the input is poorly structured, you’ll find the old adage is as true as ever: “garbage in, garbage out.” And if repeatedly pressed with inaccurate information presented as fact, the model will eventually agree that something false is true, but that might be my next blog post.
