Nixon Medical AI Policy

Artificial Intelligence (AI) Policy

 

Scope

The use of Artificial Intelligence (AI) tools presents new challenges in terms of information security, data protection, and responsible use. This policy serves as a guide for associates on how to safely and effectively use AI tools while protecting company and customer data.

 

What are AI tools?

AI tools allow users to enter prompts and receive generated, gathered, or transformed responses. Many of these tools are conversational and may absorb the information provided by users into their systems. If data is improperly shared, it may become accessible to other users or third parties, resulting in data leakage.

 

AI tools are powerful for analyzing information, automating tasks, and brainstorming ideas, but they can also generate false or misleading outputs. Associates must therefore use AI responsibly and critically.

 

Purpose

The purpose of this policy is to ensure that associates use AI tools in a secure, responsible, and confidential manner. The policy outlines the requirements that associates must follow when using AI tools, including evaluating security risks, protecting sensitive information, and ensuring the accuracy of results.

 

Policy Statement

Our organization recognizes both the opportunities and risks of AI tools. We are committed to protecting the confidentiality, integrity, and availability of company and customer data while enabling safe and innovative use of AI. All associates are required to follow this policy and our broader security practices when working with AI tools.

 

Security Best Practices

 

1. Evaluation of AI tools

- Associates must evaluate the security of any AI tool before use.

- This includes reviewing the tool’s privacy policies, terms of service, and security practices.

- The reputation of the tool developer and any third-party services must also be considered.

- Associates are encouraged to involve IT during evaluation to ensure appropriate safeguards are in place.

 

2. Protection of confidential data

- Associates must not upload or share data that is confidential, proprietary, or customer-related without prior approval from executive leadership.

- Personally identifiable information (PII), such as names, addresses, phone numbers, email addresses, account numbers, government IDs, or identifying combinations such as CUSTOMERNAME + CUSTOMERNUMBER, must never be entered into AI tools.

- Financial information, contracts, HR records, pricing strategies, and source code must never be shared with AI tools.

- ABS Data Exception: Associates may upload anonymized ABS data as long as it does not include customer-identifiable information. If customer identifiers are present, they must be replaced with placeholders before uploading.
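
To illustrate the placeholder requirement above, here is a minimal sketch of scrubbing customer identifiers from a record before upload. The field names (`customer_name`, `customer_number`, `weekly_volume`) are hypothetical examples, not an actual Nixon Medical data schema; associates should adapt the idea to the real fields in their data.

```python
def anonymize_record(record: dict,
                     id_fields=("customer_name", "customer_number")) -> dict:
    """Return a copy of the record with customer-identifying
    fields replaced by generic placeholders.

    Field names here are illustrative only.
    """
    scrubbed = dict(record)  # copy so the original is untouched
    for field in id_fields:
        if field in scrubbed:
            # e.g. "customer_name" -> "<CUSTOMER_NAME>"
            scrubbed[field] = f"<{field.upper()}>"
    return scrubbed

# Example: identifiers are masked, operational data is kept.
sample = {"customer_name": "Acme Clinic",
          "customer_number": "48213",
          "weekly_volume": 120}
print(anonymize_record(sample))
```

The point of the sketch is that anonymization happens before any data leaves the company: only the placeholder version should ever reach an AI tool.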

 

3. Access control

- Associates must not grant access to company AI tools to anyone outside the company without prior leadership approval.

- Sharing login credentials, tokens, or other sensitive access information is strictly prohibited.

 

4. Compliance with security policies

- AI tool usage must align with existing company security standards.

- This includes using strong passwords, applying timely software updates, and following the Computer Acceptable Use Policy.

 

5. Data privacy considerations

- Before sharing any information with an AI tool, associates must ask:

  - “Would I be comfortable sharing this publicly?”

  - “Would it be acceptable if this information were leaked?”

- If the answer to either question is “no,” the data must not be uploaded.

- When uncertain, associates must consult IT or executive leadership before proceeding.

 

6. Safe use examples

- Allowed: Summarizing internal meeting notes that contain no customer identifiers, drafting marketing content, brainstorming names or campaigns, or uploading anonymized ABS data.

- Not Allowed: Uploading customer service transcripts, financial information, agreements, HR documents, or source code into AI tools.

 

7. Accuracy and bias

- Associates must recognize that AI outputs may contain inaccuracies, hallucinations, or biases.

- AI-generated content must always be fact-checked, verified, and placed in context before being used for business decisions or customer communications.

- Responsibility for accuracy rests with the associate, not the AI system.

 

Enforcement

Violations of this policy may result in disciplinary action, up to and including termination. Serious violations may also lead to legal consequences if they involve regulatory non-compliance or breaches of customer trust.

 

Review

This policy will be reviewed annually, or sooner if significant changes in AI technology, regulation, or company operations occur. Updates will be communicated to all associates to ensure ongoing compliance and awareness.
