Generative AI tools such as ChatGPT are revolutionizing the way content is created, customer interactions are managed and routine tasks are completed. However, these tools also raise concerns about data security and the risk of data leakage, given their propensity to collect and store massive amounts of data. Completely restricting the use of AI tools is not practical either, given the significant advantages they bring in productivity and quality of output. The solution, therefore, lies in developing means of mitigating the risks without diminishing the advantages of using AI.
Adhering to Zero Trust architecture standards and ensuring all customer data stays within the enterprise domain is one way of eliminating the risk of that data being used to train public LLMs. Utilizing the premium features of Azure Active Directory, which provides role-based access control, and Azure OpenAI services, which provide API access to generative AI models, grants greater visibility, control and security over data.
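As a minimal sketch of the pattern described above, the request below targets an Azure OpenAI chat-completions endpoint inside the enterprise tenant, authenticated with an Azure AD bearer token rather than a shared API key. The resource name, deployment name and token here are placeholders, not real values, and actually sending the request is left out so the sketch stays self-contained:

```python
# Sketch: assembling a call to an Azure OpenAI chat-completions endpoint,
# authenticated via an Azure AD (Entra ID) bearer token issued under RBAC.
# Resource name, deployment and token below are hypothetical placeholders.

def build_chat_request(resource: str, deployment: str, aad_token: str,
                       messages: list, api_version: str = "2023-05-15") -> dict:
    """Assemble the URL, headers and JSON body for a chat-completions call.

    Dispatching the request (e.g. with requests.post) is omitted; the point
    is that traffic goes to the organization's own Azure OpenAI resource,
    keeping data within the enterprise domain.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {
        "Authorization": f"Bearer {aad_token}",  # token from Azure AD, not a static key
        "Content-Type": "application/json",
    }
    return {"url": url, "headers": headers, "json": {"messages": messages}}

req = build_chat_request("contoso-openai", "gpt-35-turbo", "<aad-token>",
                         [{"role": "user", "content": "Summarize our leave policy."}])
```

Because access is granted through Azure AD roles, revoking a user's role revokes API access without rotating any shared secret.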
H-One also focuses on developing cloud-native solutions that fully utilize production-grade Azure OpenAI services, enabling speed, scalability and cost savings. It prioritizes solution modernization and avoids bespoke solutions, ensuring a standardized core solution that meets evolving business needs across various verticals. This approach enables businesses to adapt to changing requirements effectively and accomplish BAU (Business As Usual) work more productively.
In terms of data security, H-One’s private ChatGPT solutions address compliance concerns. By leveraging the Microsoft Azure system and the Microsoft-OpenAI partnership, it offers enhanced security measures, protecting information from external exposure. H-One’s solution operates within the customers’ tenant, with access restricted to authorized personnel, ensuring that sensitive data remains protected. It integrates company policies to guide the use of private ChatGPT, restricting access to work-related purposes and maintaining accountability through logging. Overall, H-One’s partnership with Microsoft and its commitment to data security, scalability, and practical solutions enable businesses to leverage AI effectively while maintaining robust data protection measures.
How do large language models drive digital transformation for businesses?
Dr Ranawana: Large language models have significantly transformed our interactions with technology by enabling natural language interaction, eliminating the need for coding or complex user interfaces. This breakthrough simplifies the user experience and has the potential to reshape how we interact with technology in the future.
Furthermore, large language models have the power to revolutionize the way we work due to their ability to enhance productivity via automation. Automating time-consuming and repetitive tasks such as data entry, report generation, invoicing and using chatbots for customer support, will allow employees to focus on more critical aspects of their job. This not only reduces the chances of human error and brings about significant cost and time savings but also leads to enhanced productivity, thus improving the overall performance of the business.
A recent McKinsey report highlights the immense value that large language models and generative AI systems can bring to the global economy, estimating an additional value of $2.6 to $4.4 trillion. This value stems from productivity improvements, cost savings, and new revenue opportunities.
Productivity improvements are a key advantage of LLMs. Tasks that used to take days can now be completed in minutes, and automation reduces the need for manual intervention, leading to significant cost savings. The creative input that shapes the final output, however, is still driven by human interaction with the LLM. Freeing employees from tedious tasks contributes to greater overall productivity.
Additionally, LLMs create new revenue opportunities. Tools like ChatGPT and Midjourney enable text, video and art generation, and other applications, opening up previously untapped sources of income. The potential for revenue growth is substantial.
The impact of large language models extends across various sectors and industries, enhancing productivity, reducing costs, and generating new revenue streams. Companies can transform their operations, improve customer interactions, optimize marketing and sales efforts, and leverage data more effectively. From personalized product discovery to lead prioritization and SEO optimization, large language models offer diverse solutions.
Moreover, these models are revolutionizing software engineering by transforming code writing and documentation practices in tools like GitHub Copilot. They address the significant challenges that consume engineers’ time, assisting in code development, testing, and workflow automation, ultimately improving software product quality and efficiency.
It is important to recognize that we are still in the early stages of this revolution. The potential of large language models is vast and extends across industries and job roles. As the technology advances further, its impact will continue to grow, unveiling new possibilities and reshaping our interactions with technology in ways that are yet to be fully understood. Dialog has already formed a dedicated team to develop applications on top of ChatGPT, and a few applications are already up and running (e.g. Dialog GPT, an HR chatbot and a customer service knowledge base).
How can a private version of ChatGPT enhance a client’s internal communications and efficiency?
Dr Ranawana: Using technologies like ChatGPT can significantly enhance productivity in companies, but it also introduces security risks. Inputting confidential information into the ChatGPT portal may result in OpenAI collecting and incorporating that data, potentially exposing sensitive company information in future interactions with unknown parties. Balancing productivity benefits with security concerns poses a challenge for many organizations.
To address this issue, Dialog has developed a private ChatGPT solution that leverages Azure OpenAI services. This solution utilizes powerful OpenAI models while ensuring data security through a highly secure API. By using the Microsoft platform, there is an assurance that any data sent to the models will not be collected by Microsoft or OpenAI. Dialog’s private ChatGPT portal provides control over the privacy of private and confidential information.
Another risk associated with systems like ChatGPT is their potential misuse for illegal purposes. Inappropriate use of ChatGPT by employees during work hours may hold the company liable for their actions. To mitigate this risk, Dialog’s private ChatGPT solution operates within a secure and private internal portal where data is not collected by any party. Access to the system is restricted to authorized personnel through company login accounts, ensuring that only approved individuals can use it.
Additionally, Dialog has integrated a corporate AI policy that prohibits staff from using the public version of ChatGPT for official work. This policy enforces the use of the internal system and ensures that questions and answers align with work-related purposes. Similar to blocking social media or personal email in offices, Dialog has engineered the prompts in the private ChatGPT to restrict usage to work-related matters. This approach enhances security and guides staff to use the system appropriately.
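One common way to engineer such a restriction, sketched below, is to prepend a guardrail system prompt to every conversation turn. The wording and refusal rule here are illustrative assumptions, not Dialog's actual prompt engineering:

```python
# Sketch: a hypothetical guardrail system prompt that scopes a private
# ChatGPT deployment to work-related use only. The exact wording is an
# illustrative assumption, not the deployed prompt.

WORK_ONLY_SYSTEM_PROMPT = (
    "You are an internal assistant for company staff. Answer only questions "
    "related to work tasks, company policies and professional topics. If a "
    "request is unrelated to work, politely decline and remind the user of "
    "the corporate AI policy."
)

def build_messages(user_question: str) -> list:
    """Prepend the guardrail system prompt to every conversation turn,
    so the restriction applies regardless of what the user types."""
    return [
        {"role": "system", "content": WORK_ONLY_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("Draft a status update for the migration project.")
```

Because the system prompt is injected server-side, end users cannot remove it from the client.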
Accountability and traceability are maintained through comprehensive logging of every question and answer in the private ChatGPT system. This provides oversight and enables tracking of activities performed on the system. These measures collectively mitigate risks associated with system usage while recognizing the importance of providing access to technology for enhanced productivity.
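The logging described above can be sketched as a simple audit wrapper around each exchange. The field names and in-memory storage are illustrative assumptions; a real deployment would write to durable, access-controlled storage:

```python
# Sketch: minimal audit logging of every question/answer pair, keyed to the
# authenticated user, so activity on a private portal stays traceable.
# Field names and the in-memory list are illustrative only.

import datetime

audit_log: list[dict] = []

def log_interaction(user_id: str, question: str, answer: str) -> dict:
    """Record one Q&A exchange with a UTC timestamp for later review."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "question": question,
        "answer": answer,
    }
    audit_log.append(entry)
    return entry

entry = log_interaction("jdoe", "What is the travel policy?", "See the HR portal.")
```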
Finding a middle ground that allows staff to leverage such technologies while maintaining control and security is crucial. Depriving employees of these tools while other companies adopt them may result in lost competitive advantages and productivity gains. Dialog’s private ChatGPT solution offers a controlled environment that strikes a balance between technology access and security needs.
With the implementation of advanced technologies like the private version of ChatGPT, will there be any concerns about potential job displacement or changes in job roles for employees in organizations?
Dr Ranawana: My perspective is that most jobs won’t be completely replaced by technology. Instead, those jobs will be taken over by individuals who effectively utilize technology. Traditional roles, which currently rely on manual processes, will evolve to incorporate technology as an enhancement. The nature of jobs will change, but they won’t be entirely replaced.
The key lies in the skill set required for these jobs, as it will fundamentally change in the coming years due to productivity improvements offered by tools like AI. Job displacement will only occur if individuals fail to adapt and upskill themselves. Just as knowing Microsoft Word, Excel, and PowerPoint became essential for many jobs, the same transition will occur with generative AI tools and the latest AI advancements. Those who are proficient in leveraging these technologies will outperform those who are not.
We can anticipate a similar transformation to what we witnessed during the digitization era and the introduction of Microsoft Office over the past two decades. Most roles will experience a comparable shift in the next 10-15 years due to the rise of AI. Jobs won’t disappear entirely, but their requirements and skill components will change significantly.
Ultimately, job roles will incorporate a larger technology component, and those who can effectively harness this technology will have a competitive edge. It will be similar to the contrast between individuals who excel in using PowerPoint, Excel, and Word and those who don’t. The productivity difference will be further amplified by advancements in AI.
What steps does H-One take to ensure the scalability and robustness of these digital solutions, especially when it involves handling evolving business needs?
Mendis: To ensure the security, compliance, scalability and robustness of our digital solutions in handling evolving business needs, H-One takes several steps. First and foremost, we adhere to Microsoft’s best practices for security and compliance, leveraging the more than $1bn Microsoft invests annually in security. A Zero Trust approach ensures that security is maintained at all levels, with a focus on data in transit and data at rest. We follow a cloud application development framework to guarantee best practices.
Authentication and authorization are key aspects of our solutions, and we leverage the premium features of Azure Active Directory for this purpose. This includes implementing multi-factor authentication, which adds an extra layer of security. We also prioritize data classification to grant appropriate access rights to the right users. This minimizes the risk of unauthorized data access. Furthermore, we maintain comprehensive audit trails to meet security and compliance requirements.
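The data-classification principle above can be illustrated with a toy mapping from classification labels to the roles allowed to read them. The labels and role names here are hypothetical; in practice such checks would be enforced through Azure AD groups and RBAC assignments rather than application code:

```python
# Sketch: mapping hypothetical data-classification labels to the roles
# permitted to read them. In production this would be enforced via
# Azure AD group membership and RBAC, not a hard-coded table.

CLASSIFICATION_ACCESS = {
    "public": {"employee", "contractor", "manager"},
    "internal": {"employee", "manager"},
    "confidential": {"manager"},
}

def can_access(role: str, classification: str) -> bool:
    """Return True if the given role may read data at this classification.
    Unknown classifications default to no access (deny by default)."""
    return role in CLASSIFICATION_ACCESS.get(classification, set())
```

Denying by default on unknown labels mirrors the Zero Trust posture described earlier.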
As an entity that supports customers on their cloud migration journey, H-One’s solutions are cloud-native, providing robust and scalable offerings. We emphasize solution modernization and the journey to the cloud as the first stage in consuming AI services. This not only enhances scalability but also helps reduce operational costs. Developing and training a large language model requires millions of dollars in compute power; by utilizing Azure OpenAI services, we can access the same capability for a fraction of the cost.
Taking a standards-based approach to solution development, and utilizing AI-driven tools such as GitHub Copilot, we develop robust solutions quickly, solutions that are agile and able to rapidly iterate into domain-specific offerings. This approach allows us to adapt to evolving business needs effectively and efficiently.
Regarding the use of OpenAI, H-One recognizes the potential of machine learning and AI and has embraced the utilization of OpenAI for enhancing efficiency and driving productivity. The fastest and most effective means of doing this is by leveraging the complete toolset from Microsoft Cognitive Services. In the case of publicly available LLMs, it is important to note that certain information may be captured and used for training. However, in our solutions, data security, compliance and privacy, especially for sensitive information, are always prioritized.
We introduce the Azure OpenAI APIs to clients and offer tailored services built upon the Azure OpenAI framework. We work closely with Dr Romesh, our Chief Analytics Officer, who has been instrumental in implementing ChatGPT across various applications, including managing internal Q&A portals, the HR portal being one example. By automating the Q&A process, we improve efficiency and reduce reliance on the HR team for routine inquiries. Throughout this process, we ensure that data security concerns are addressed, safeguarding sensitive information.
At H-One, we are committed to investing in technology and leveraging it to its fullest potential, while maintaining the highest standards of security and privacy.
In today’s data-driven world, how does H-One prioritize and implement robust data security measures to safeguard sensitive information for their clients?
Mendis: H-One has established a strong partnership with Microsoft as a Tier 1 partner. We have strong credentials in deploying Microsoft Security Stack and supporting customers in their journey to the cloud, allowing us to provide clients with a secure environment through Microsoft Azure OpenAI. As a trusted partner, H-One promotes and deploys Azure OpenAI solutions, offering clients the necessary infrastructure for implementation and relevant advisory services.
Drawing from our extensive experience and insights gained through client dialogues and internal development within the Dialog group, we have developed a suite of products tailored to meet their needs. One such product is the internal version of ChatGPT, customized to suit the requirements of individual organizations and their own AI policies. For example, organizations may want to restrict certain content, such as sports-related information or cooking recipes, that is not relevant to their internal use. Through configuration, we can customize the solution and limit access to specific professional staff during office hours, empowering them to effectively utilize the OpenAI service.
Another service we provide is the training of Q&A engines based on internal documents and policies. If an organization desires to develop a question-and-answer engine, we can train the engine and offer it as a service. These are the two primary services we currently offer to cater to the unique requirements of our clients.
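The retrieval step of such a Q&A engine can be sketched with a toy scorer that matches a question against internal passages by word overlap. A production engine would use embeddings (e.g. via Azure OpenAI) rather than this deliberately simple illustration, and the sample documents below are invented:

```python
# Sketch: a toy retrieval step for a Q&A engine over internal documents,
# scoring passages by word overlap with the question. A real system would
# use embedding similarity; this is only an illustration of the idea.

import re

def tokenize(text: str) -> set:
    """Lowercase and split into word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def best_passage(question: str, passages: list) -> str:
    """Return the passage sharing the most word tokens with the question."""
    q_words = tokenize(question)
    return max(passages, key=lambda p: len(q_words & tokenize(p)))

docs = [
    "Annual leave requests must be submitted through the HR portal.",
    "Expense claims require a manager's approval within 30 days.",
]
answer_src = best_passage("How do I request annual leave?", docs)
```

The retrieved passage would then be passed to the model as context, so answers stay grounded in the organization's own policies.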
At H-One, we strive to deliver secure and tailored solutions through our partnership with Microsoft and our commitment to leveraging innovative technologies like Azure OpenAI. We address the compliance concerns related to data security that arise when relying solely on OpenAI offerings. By leveraging the Microsoft-OpenAI partnership, we implement enhanced security measures that protect information from external exposure, ensuring greater data protection.
Furthermore, if organizations aim to develop their internal policies and documents while ensuring a secure product, H-One can assist in defining the best architecture, tailored to their specific needs. With our expertise and experience, we provide valuable insights to shape robust and secure AI solutions.
In addition to custom AI solution development, H-One can advise on off-the-shelf AI offerings from Microsoft. Announced recently at Microsoft Inspire, Bing Chat Enterprise and a multitude of Copilot offerings promise to massively boost the productivity of the Microsoft suite of products. This is perhaps the biggest disruption in our industry since the cloud.