CIS103 ePortfolio (Final)
Videocast: https://youtu.be/BbxTv5GSOac
Podcast: https://youtu.be/z2X3NyJndxA
Blog: https://www.theniceguys.co/blog
Reflection:
The class refined my understanding of social media as an ecosystem where ethics, law, platform design, and day-to-day implementation overlap. Lessons on digital hygiene, boundaries, and security updated my habits: pause before publishing, separate personal and professional identities, enable MFA, and audit privacy settings. I applied the same approach to more complex subjects such as data minimization, role-based permissions, and disclosure practices, which I integrated into the Ozone Visuals ethics guide through tiered consequences, approval workflows, and anti-workaround measures. The throughline was intentionality: design systems that plan for human error, clarify responsibility, and make the ethical path the path of least resistance.
The legal landscape modules helped me make compliance part of the creative process. Discussions about copyright, licensing, and fair use shed light on why commercial posts need proper music sync rights—and why “personal” licenses granted by platforms do not extend to brands. FTC disclosure rules on material connections became actionable in my storyboarding and policy work, where I focused on explicit, front‑loaded labels (“Ad,” “Paid Partnership”), caption accuracy, and consent. Events of the moment (DSA transparency, platform moderation, and deepfake harms) expanded my thinking about recommender accountability and risk assessments, instilling the habit of building campaigns that value safety basics, truthful privacy claims, and accessible, documented workflows.
Practically, the class distilled these ideas into resources and methods I can implement right away. The videocast storyboard mapped out strategies for agile testing, human reactions prioritized over automation, service-as-content, and value metrics (saves, shares, CTR, comment quality) that look beyond vanity likes. My RSS feeds (tech news, academic research, industry blogs) and platform plan (Discord, Reddit, LinkedIn, TikTok, Shopify) provided a replicable framework for self-learning and distribution with an eye to moderation constraints and commerce integration. I am also incorporating safeguards into my rapid deployment approach: small iterative tests, explicit approvals, consent and caption checklists, and postmortems to learn from failure. I am leaving the course with a portable framework: think ethically, measure what matters, design for compliance, and keep it human.
Best Practices Paper:
Ozone Visuals Social Media Ethics and Best Practices Guide
I. Introduction
Ozone Visuals is a creative marketing and video production company that works in the public eye. Brand awareness and client trust are earned and lost every day on social media, and legal liability can follow from a single post. The speed and reach of these tools enable faster outreach and community growth, but the risks of unethical conduct multiply as well. Ozone Visuals must balance swift execution against the potential for ethical missteps, including premature leaks of client assets, insufficient transparency in paid partnerships, careless use of audience data, and entanglement of personal and professional voices. Research-based guidance shows that social media policy must be well defined and assigned to specific roles, with accountability and support mechanisms to prevent violations (American Psychological Association [APA], 2021; Ventola, 2014). This ethics and best practices guide operationalizes actionable, evidence-based practices for the distinct roles of managers and non-management employees at Ozone Visuals working with clients, contractors, and social platforms. The guide provides clear standards of appropriate conduct in typical scenarios, a proportionate and consistent tiered structure of consequences for violations, and a detailed communication plan that anticipates and mitigates workarounds to embed ethics into day-to-day processes (Bélanger & Crossler, 2011; Ventola, 2014).
II. Ethically Challenging Situations and Right Conduct
A. Managers (Directors, Producers, Account Leads, and Social Strategists)
Managers at Ozone Visuals have frequent access to embargoed client assets, including unreleased product images, campaign concepts, and proprietary data. Ethical dilemmas around early reveals are high-risk; a casual post, text, or share of a draft reel or work portfolio might unintentionally reveal confidential content and erode client trust. The right course of action in these situations is to treat all pre-launch content as confidential and refrain from posting until a written client release or contractual clause authorizes public disclosure. Managers should access only approved assets from library folders, watermark drafts as "internal only," and use platform-specific access controls such as private links and shared team drives with a clear access log. If in doubt, the manager should consult the client account lead and reference the master content calendar and approval list prior to posting. This approach applies widely cited ethical principles of confidentiality and non-disclosure to the fast pace and high visibility of social media (APA, 2021; Ventola, 2014).
A second ethical scenario involves influencer marketing, sponsored posts, and paid partnerships. Relationships that include gifted products, payment, paid collaborations, or affiliate links can mislead audiences and violate platform or regulatory norms when they are not disclosed. The manager must ensure clear, conspicuous disclosure of any material relationship at the beginning of a caption or in on-screen text or stickers for short-form video, and must explicitly label the post as a "Paid Partnership," "Ad," or "Gift." Campaign briefs should include standard language on disclosure practices, and the manager should proof sample posts for compliance, using a platform's built-in partnership tools whenever possible instead of relying on hashtag disclosures alone. Honest and transparent disclosure maintains audience trust and complies with professional guidance on advertising and digital representation (Ventola, 2014; APA, 2021).
A third ethical situation concerns data privacy and audience targeting. Managers have privileged oversight of analytics dashboards, lookalike audiences, custom audiences, and remarketing lists, all of which can be subject to over-collection or insecure handling of personal data. Ethical behavior in this area requires data minimization (collecting only what is necessary for the campaign goal), secure storage of files in access-restricted tools and folders, and anonymization when possible. Audience segmentation and targeting should be non-discriminatory and avoid tactics or attributes that prey on vulnerable populations; all data sources should be vetted against platform policies and tested internally for fairness. Clients should receive clear summaries of data collection and permissions, and requests for opt-out or deletion should be acted on promptly. Studies of privacy in information systems have long recognized both the potential and the perils of social media data, and advocates for social media use in professional contexts have placed strong emphasis on the ethical obligation of organizations to protect personal information (Bélanger & Crossler, 2011; Ventola, 2014).
B. Non‑Management Employees (Editors, Designers, Coordinators, Community Managers)
A common ethical situation for non-management employees is the separation of personal and professional identity online. Posts on personal accounts about work-adjacent interests can be attributed to the company or its clients, and employees should protect themselves and the company against personal views being misread as official company statements. Social media ethics in this context requires that employees use clear disclaimers such as "Views are my own" when writing about work-related topics or industry news, and never reveal client information, campaign strategy, or nonpublic data. Personal posts or photos should not include client logos, internal footage, or set extensions without written permission, and employees should never enter into arguments on brand accounts but should escalate problematic exchanges to a manager for resolution. These norms of transparency and role-boundary clarity align with guidance on preventing conflicts arising from dual or multiple relationships in the public sphere (APA, 2021).
Another frequent ethical scenario for non-management employees is copyright and music licensing. Draft videos that use unlicensed music, stock, or imagery may pass moderation only to be demonetized or taken down later, while a pattern of infringement can lead to legal claims. To avoid violations, employees should use only approved assets with explicit rights, such as library assets with license receipts or client-provided materials with defined usage permissions. Project folders should include a visible reference to licenses, expiration dates, and any geographic or platform-specific use restrictions, and any user-generated content should be obtained with written permission and credited to the creator as agreed. Respect for copyright and honest representation are both foundational to ethical professional practice and required by most social platforms (Ventola, 2014).
The final ethical scenario for non-management staff is community moderation and management. Public channels have no barrier to entry for commenters, and staff who moderate comments can be exposed to harmful content or accused of bias or undue censorship when enforcing policies. Moderation should follow a written rubric that specifies removal of hate speech, doxxing, and direct threats while preserving posts that offer good-faith critique. Non-management employees should not give medical, legal, or financial advice, and in cases of disputed or misunderstood information about a client's products they should rely on approved facts and escalate to a manager. All removals and escalations should be recorded in a moderation log. Guidance on ethical community moderation in public channels has called out the need for engagement without undue harm and for consistency with organizational values (Ventola, 2014; APA, 2021).
III. Consequences for Violations (Proportional and Consistent)
The following tiered structure of consequences is aligned with ethics governance literature on transparency, consistency, proportionality, and deterrence (APA, 2021; Ventola, 2014). Minor violations are unintentional first-time offenses with minimal risk, such as omitting a required disclosure tag on a draft caught before publishing or omitting a license receipt in a project folder for low-risk stock images. The appropriate consequence for these cases is documented coaching, immediate correction within 24 hours, and completion of a refresher training module. Moderate violations are policy breaches with reputational, competitive, or compliance risk but no willful misconduct. Posting from a personal account to a brand channel without an appropriate disclaimer, delaying the removal of harassing comments despite the moderation rubric, or uploading a licensed asset to a non-permitted platform would all be moderate violations. These infractions should result in a written warning, completion of mandatory training, a temporary posting suspension, and notification of the client if the client is affected. Major violations are willful or repeated misconduct that causes significant harm to a client or third party, such as publishing embargoed client content, concealing a material relationship in paid posts, mishandling private audience data in a way that leads to exposure, or posting discriminatory content on a public channel. These severe breaches should result in a final warning or termination decision depending on severity, removal of account access, a formal incident review, and mandatory remediation, including transparent and timely client communication when necessary. Mitigating factors (context, self-reporting, rapid remediation, ambiguous instructions) and aggravating factors (willfulness, prior warnings, breadth of harm, liability) should be considered in adjudication. The structure and examples should be well documented to ensure consistent enforcement, reduce discretion and bias, and support defensible governance decisions (APA, 2021).
IV. Communication, Accountability, and Anti‑Workaround Measures
The success of this guide depends on how it is communicated, where it is accessible, and how accountability is maintained and enforced. Ozone Visuals will develop and publish a dedicated Ethics & Social Media microsite on the company intranet with a searchable repository containing this guide, a standard disclosure rubric, an asset-use matrix, moderation procedures, and data-handling standard operating procedures. Each campaign brief will include a "Compliance Panel" summarizing disclosure requirements, license sheet references, and release dates, with checklists integrated into the production workflow to flag required ethics considerations at each step. All onboarding training will include specific education on social media ethics, and semiannual refresher training will use case simulations and highlight platform policy changes to keep staff up to date (Ventola, 2014).
Leadership accountability is distributed among defined roles. The Head of Social & Content owns the policy and is responsible for annual policy updates and formal audits; account leads manage client-specific approvals and embargo controls; the Data & Compliance Lead performs privacy reviews and data minimization checks; and the Community Manager Lead supervises moderation practices and incident logging. Audits will be performed quarterly on randomized samples of posts for disclosure accuracy, asset license documentation, and adherence to brand safety policies; monthly on moderation and takedown logs; and periodically on least-privilege access to client assets and analytics. This approach to controls and privacy monitoring is well supported in the information systems literature (Bélanger & Crossler, 2011).
Workarounds must be anticipated and blocked with technical, procedural, and culture-building controls. Technical controls include role-based permissions in all scheduling platforms, pre-publish approval queues for high-risk posts, watermarking and "internal only" overlays on unreleased assets, single sign-on with multi-factor authentication for all brand accounts, and automatic session timeouts on shared devices. Procedurally, the company prohibits posting to brand accounts from personal devices except through company-managed profiles, documents an emergency backup posting playbook with communication and approval steps, and enforces a two-person rule for access to embargoed content in which a preparer and an approver must sign off prior to scheduling. Culture measures include an anonymous reporting channel, a strict nonretaliation policy, and post-incident debriefs designed to promote learning rather than assign blame unless willful misconduct is established (APA, 2021).
Ethics violations should be handled through a formal intake and triage process. Reporting can be initiated via the Ethics Hub or a line manager to start a time‑stamped case that automatically notifies the relevant leads. Investigation of cases will include evidence collection (URLs, screenshots, logs) and interviews to map the findings to policy. The case outcome will be documented with assigned consequences from the tiered structure above, as well as with corrective actions including transparent public correction posts, client notification, or platform appeals. To drive continuous learning, all anonymized cases will be added to the training library and standard operating procedures will be updated with lessons and preventive controls to address any gaps (APA, 2021; Ventola, 2014).
V. Implementation Timeline and Continuous Improvement
The first month of implementation will include publication of this guide, role-based workshops, and configuration of any necessary platform approval flows. In the second and third months the company will complete the first audit and begin collecting metrics on disclosure compliance, license documentation, and community moderation service levels. Thereafter, the guide will be updated semiannually to incorporate platform and regulatory changes, and targeted micro-trainings will be deployed in response to audit findings or emerging risks. This continuous improvement loop draws on privacy and control research in information systems that frames ethical practice as an ongoing organizational capability rather than a one-time policy deployment (Bélanger & Crossler, 2011).
VI. Conclusion
Ethical use of social media is a core capability of Ozone Visuals. By defining specific scenarios for managers and non-management employees, establishing proportional consequences, and putting in place communication and accountability practices, this policy protects clients, audiences, and staff and supports trust and performance. The guide operationalizes evidence-based guidance on confidentiality, transparency, privacy, and professionalism and translates those principles to the realities of content production and marketing campaigns (APA, 2021; Bélanger & Crossler, 2011; Ventola, 2014). With consistent enforcement, auditing, and continuous process improvement, Ozone Visuals can produce engaging social media content that is not only effective but also ethically responsible.
References
American Psychological Association. (2021). APA guidelines for the optimal use of social media in professional psychological practice. https://www.apa.org/about/policy/guidelines-optimal-use-social-media.pdf
Bélanger, F., & Crossler, R. E. (2011). Privacy in the digital age: A review of information privacy research in information systems. MIS Quarterly, 35(4), 1017–1042. https://www.jstor.org/stable/41409971
Ventola, C. L. (2014). Social media and health care professionals: Benefits, risks, and best practices. P&T, 39(7), 491–520. https://pmc.ncbi.nlm.nih.gov/articles/PMC4103576/