Safety and Ethics of Early Model 1

Our top priority above everything else

As the Founder, and as someone who struggles with mental health issues myself, I have learnt and personally understand the three factors that matter most to you.

Privacy and confidentiality

Early Model 1 deletes conversations at your request and does not require highly sensitive information to operate. All data is visible only to you.

Clarity about core systems in our solution

Early Model 1 is built on a new core system that uses therapeutic expert knowledge as the central brain of the AI.

Data Security and protection

Early Model 1 encrypts your data, collects only the minimum necessary, and complies with privacy standards advised by experts.

So let's explore further:

Privacy and Confidentiality

What are the common mistakes we see?

1

Many apps collect large amounts of highly sensitive data, such as medical records and details of complex conditions, often storing it long-term. This creates a massive privacy risk in the event of a breach.

2

Many app developers have vague privacy policies and have been known to share private user data with advertisers or partners (third-party sharing).

3

Many apps ask for your consent to access features without explaining why the access is needed or how your data will be used, leaving you in uncertainty.

What do we do differently?

1

We collect only the data absolutely necessary for the app to function, such as login credentials and basic tracking metrics (emotion tracking, facial emotion expression, chat history, etc.). No unnecessary personal information or session details are stored, unless you choose to add extra information to the system yourself.

2

We have a clear, transparent privacy policy that is easy to understand. We explicitly state how user data will be used, and we never share your personal information with third parties. Our commitment to confidentiality means you never need to worry about your information being taken without your consent.

3

We are upfront about asking for consent. When users are asked to grant access, we provide a detailed explanation of why we need the data and how it will be used, ensuring full transparency. We also allow users to adjust their privacy settings easily at any time.

Clarity about core system functions

What are the common mistakes we see?

1

The biggest question anyone asks is this: how and where do you get your advice from? Many companies build AI technology without knowing the answer to this question.

2

As a company, we see AI models being built using generative AI alone (the fast way). But generative AI essentially makes the internet the brain of their core, and that raises some uncomfortable questions.

3

Who tells the internet whether advice is right or wrong? How does the AI know it's right or wrong? And how does the internet filter out harmful information?

It was shocking to see how many providers couldn't answer those questions.

But we can:

1

From the start, we have had a therapist on board providing the information and knowledge we train our AI models with. These are known as "Scripts": therapeutic solutions and roleplay situations that have worked and are ethically credible, drawn directly from the therapist's knowledge.

That's why we call ourselves therapy-driven!

2

We ensure that our solutions use information from licensed professionals and therapists, rather than relying on the internet, with its history of unsafe, unvalidated and unethical data that can harm people's lives, including yours.

3

Our AI is trained by developers with 2-7 years of experience to filter out harmful and dangerous information, limiting its methods to the successful treatment strategies used by our therapists as its core data. This assures users that the app provides approaches that work and are safe to act on.

Sign up for Early Model 1

Any questions or queries?
