By Meagan Gentry / 21 Mar 2023
Our Artificial Intelligence (AI) team at Insight uses a handful of high-impact practices to encourage “out-of-the-box” thinking, helping us solve unique challenges for our partners and clients in novel and effective ways. Here are four ways we unlock creative problem-solving.
Creative solutions take time to cultivate. Our Innovate@Insight program is a framework we use to allocate time and resources to credit and celebrate inventions from all Insight teammates, at every level. Innovate@Insight offers a psychologically safe environment to share and test new ideas with a network of industry and legal experts who help inventors capture their intellectual property and apply for patents. Each week, our teammates meet to seek and provide support for novel approaches that often turn into repeatable solutions for clients.
When we develop AI solutions for a given industry, we sometimes forget the value of the lessons we’ve learned from applied AI in other industry verticals. Coming together as a team can revitalize those transferable use cases.
For example, we created a computer vision solution for one of the world’s largest brewing companies to improve manufacturing line operations and product quality by spotting misplaced labels on cans and bottles in real time. Using the same computer vision technology, we partnered with a U.S. nonprofit academic medical center to detect spinal fractures from X-rays.
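To make that transfer concrete, here is a minimal sketch of the underlying pattern, assuming a transfer-learning approach: one pretrained vision backbone reused across very different domains by swapping only the task-specific classification head. The task names and class counts below are hypothetical, not details of our actual systems.

```python
# Minimal sketch (not our actual pipeline): reusing one pretrained
# vision backbone across unrelated domains via transfer learning.
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int) -> nn.Module:
    """ImageNet-pretrained ResNet-18; only the final head is task-specific."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():  # freeze the shared backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

# Same technology, two verticals: only the training data differs.
label_inspector = build_classifier(num_classes=2)    # label correct vs. misplaced
fracture_detector = build_classifier(num_classes=2)  # fracture vs. no fracture
```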
Another example: the forecasting techniques traditionally used to predict downturns in stock prices are the same techniques that can preemptively warn a driver that their tire pressure is critically low.
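As a toy illustration of that reuse (an assumed example, not either production system), the same rolling-forecast residual check can flag a sharp drop in any signal, whether it is a price series or a pressure sensor:

```python
# Toy sketch: one downturn detector, two verticals. Flags any point that
# falls k standard deviations below a naive rolling-mean forecast.
import numpy as np

def warn_on_downturn(series: np.ndarray, window: int = 10, k: float = 2.0) -> list[int]:
    """Return the indices where the series drops far below its recent trend."""
    warnings = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        forecast = history.mean()                  # naive rolling-mean forecast
        threshold = forecast - k * history.std()   # tolerance band below forecast
        if series[t] < threshold:
            warnings.append(t)
    return warnings

stock_prices = np.array([101, 102, 100, 103, 102, 101, 104, 103, 102, 101, 80.0, 78])
tire_pressure = np.array([32, 32, 33, 32, 31, 32, 32, 31, 32, 32, 24.0, 22])
print(warn_on_downturn(stock_prices))   # flags the sharp price drop
print(warn_on_downturn(tire_pressure))  # flags the pressure loss, same code
```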
Recently, our team applied machine learning to optimize staffing for improved airport baggage transport. The same automation system could be plugged into problems in several other verticals, such as barista planning for optimal coffee order fulfillment or customer tech support allocation for optimal ticket resolution.
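The domain-agnostic core of such an optimizer can be surprisingly small. The sketch below is a deliberate simplification (real staffing models also handle shifts, skills and service-level targets), and every demand number and service rate is invented for illustration:

```python
# Hedged sketch of a domain-agnostic staffing calculation: given a demand
# forecast per interval and a per-worker service rate, compute the minimum
# headcount. All figures below are illustrative, not client data.
import math

def staff_required(demand_forecast: list[float], units_per_worker: float) -> list[int]:
    """Minimum workers per interval so that capacity covers forecast demand."""
    return [math.ceil(demand / units_per_worker) for demand in demand_forecast]

# Same optimizer, three verticals; only the demand signal and rate change.
bags_per_hour = [120, 300, 450, 200]       # airport baggage transport
print(staff_required(bags_per_hour, 60))   # -> [2, 5, 8, 4]

coffee_orders = [40, 90, 150, 60]          # barista planning
print(staff_required(coffee_orders, 30))   # -> [2, 3, 5, 2]

support_tickets = [15, 25, 40, 10]         # tech support allocation
print(staff_required(support_tickets, 5))  # -> [3, 5, 8, 2]
```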
We owe this cross-industry innovation to the diversity of our team’s professional backgrounds, which span both data science and a variety of industry verticals.
As data scientists, we’re thrilled when we identify an opportunity to use the latest and greatest in AI technology. However, directing too much focus to the technology is a common mistake that can prevent us from understanding the business problem, which is a human experience problem at its core.
At Insight, we begin with design-thinking workshops alongside our clients to identify root user experience needs.
One workshop we conducted with a commercial construction components manufacturer started with a seemingly straightforward but critical problem definition: improve time to shipment of finished goods. After only one day of dedicated focus on interviewing and empathizing with more than 20 roles across the organization, we identified the root causes of the time-to-shipment problem. This enabled us to define 10 refined problem statements that were actionable with small, targeted investments in AI and the Internet of Things (IoT), including novel technology combinations for product tracking, tools for human resource management and real-time analytics on the plant floor.
Not only does a human-centric approach lead to more effective innovation, but we’ve also found that data science projects often fail when this step is neglected.
When we as AI experts begin solutioning, we often start with the question: “Can we solve this problem with AI?”
Because Insight is committed to upholding responsible AI practices, we challenge the question, “Can we?” with a more important question: “Should we?”
The use of AI in decision-making raises important ethical questions, such as how to ensure AI systems are fair and unbiased and how to protect people's safety, privacy and security.
Thinking outside the box is sometimes an exercise in knowing when to use AI and when to lean on alternative methods for decision support.
For example, mortgage loan approvals are a common use case for AI, but historical data often carries discriminatory biases that are difficult to detect systematically in AI models. Rather than deploying a stand-alone AI model, we might engineer a “human-in-the-loop” step into the final decision support system so that a loan officer can review the historical factors that influence the AI-produced approval or denial. Even state-of-the-art AI models may lack the transparency or sophistication needed to protect us from unfavorable outcomes.
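That human-in-the-loop step can be made explicit in the system’s architecture. The sketch below is an assumed illustration (the model, its weights and the review policy are all hypothetical): the AI surfaces a recommendation along with the factors behind it, but the function returns a decision only after a human reviewer weighs in.

```python
# Assumed sketch of a human-in-the-loop decision step: the model explains
# and recommends, but only the loan officer's decision is final.
from dataclasses import dataclass

@dataclass
class Recommendation:
    approve: bool
    score: float
    top_factors: list[tuple[str, float]]  # (feature name, contribution)

def model_recommend(features: dict[str, float]) -> Recommendation:
    # Stand-in for a trained model: a linear score with hand-set weights.
    weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
    return Recommendation(approve=score > 0, score=score, top_factors=top)

def final_decision(features: dict[str, float], officer_review) -> bool:
    recommendation = model_recommend(features)
    # The AI never decides alone: a human reviews the factors behind the score.
    return officer_review(recommendation)

applicant = {"income": 4.2, "debt_ratio": 3.1, "years_employed": 6.0}
# The lambda stands in for the officer's judgment after reviewing the factors.
print(final_decision(applicant, officer_review=lambda rec: rec.approve and rec.score > 1.0))
```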