
Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms



   

The private and public sectors are increasingly turning to artificial intelligence (AI) systems and machine learning algorithms to automate simple and complex decision-making processes.

AI is also having an impact on democracy and governance as computerized systems are being deployed to improve accuracy and drive objectivity in government functions. The availability of massive data sets has made it easy to derive new insights through computers. As a result, algorithms, which are a set of step-by-step instructions that computers follow to perform a task, have become more sophisticated and pervasive tools for automated decision-making.

In the pre-algorithm world, humans and organizations made decisions in hiring, advertising, criminal sentencing, and lending. These decisions were often governed by federal, state, and local laws that regulated the decision-making processes in terms of fairness, transparency, and equity.

Today, some of these decisions are entirely made or influenced by machines whose scale and statistical rigor promise unprecedented efficiencies. Algorithms are harnessing volumes of macro- and micro-data to influence decisions affecting people in a range of tasks, from making movie recommendations to helping banks determine the creditworthiness of individuals. In machine learning, an algorithm is first given a set of training data that represents the decisions of interest. From that training data, it then learns a model which can be applied to other people or objects and makes predictions about what the correct outputs should be for them.
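To make that training-then-prediction pattern concrete, here is a minimal sketch in Python, assuming scikit-learn is available and using a handful of invented creditworthiness features (income and debt-to-income ratio); the data, feature names, and labels are illustrative, not drawn from any real lending model.

```python
# Minimal sketch of the train-then-predict pattern described above.
# All numbers are made up; features and labels are illustrative only.
from sklearn.linear_model import LogisticRegression

# Training data: each row is [income_in_thousands, debt_to_income_ratio];
# each label records whether that past applicant repaid a loan (1) or not (0).
X_train = [[45, 0.40], [82, 0.15], [30, 0.55], [95, 0.10], [52, 0.35], [28, 0.60]]
y_train = [0, 1, 0, 1, 1, 0]

# The algorithm "learns a model" from the training data ...
model = LogisticRegression()
model.fit(X_train, y_train)

# ... and that model is then applied to new applicants it has never seen,
# predicting what the "correct" output should be for each of them.
new_applicants = [[60, 0.25], [33, 0.50]]
print(model.predict(new_applicants))        # predicted repay / not repay
print(model.predict_proba(new_applicants))  # predicted probabilities
```

Whether such a model treats people fairly depends almost entirely on what went into the training data, which is where the concerns discussed below begin.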

However, because machines can treat similarly situated people and objects differently, research is starting to reveal troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Some algorithms, in other words, run the risk of replicating and even amplifying human biases, particularly those affecting protected groups.

The exploration of the intended and unintended consequences of algorithms is both necessary and timely, particularly since current public policies may not be sufficient to identify, mitigate, and remedy consumer impacts.

With algorithms appearing in a variety of applications, we argue that operators and other concerned stakeholders must be diligent in proactively addressing factors which contribute to bias. Surfacing and responding to algorithmic bias upfront can potentially avert harmful impacts to users and heavy liabilities against the operators and creators of algorithms, including computer programmers, government, and industry leaders.

These actors comprise the audience for the series of mitigation proposals to be presented in this paper because they either build, license, distribute, or are tasked with regulating or legislating algorithmic decision-making to reduce discriminatory intent or effects. Our research presents a framework for algorithmic hygiene, which identifies some specific causes of biases and employs best practices to identify and mitigate them.

We also present a set of public policy recommendations, which promote the fair and ethical deployment of AI and machine learning technologies. This paper draws upon the insight of 40 thought leaders from across academic disciplines, industry sectors, and civil society organizations who participated in one of two roundtables. Our goal is to juxtapose the issues that computer programmers and industry leaders face when developing algorithms with the concerns of policymakers and civil society groups who assess their implications.

To balance the innovations of AI and machine learning algorithms with the protection of individual rights, we present a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies—all of which promote the fair and ethical deployment of these technologies. Our public policy recommendations include the updating of nondiscrimination and civil rights laws to apply to digital practices, the use of regulatory sandboxes to foster anti-bias experimentation, and safe harbors for using sensitive information to detect and mitigate biases.

We also outline a set of self-regulatory best practices, such as the development of a bias impact statement, inclusive design principles, and cross-functional work teams. Finally, we propose additional solutions focused on algorithmic literacy among users and formal feedback mechanisms to civil society groups. The next section provides five examples of algorithms to explain the causes and sources of their biases. Later in the paper, we discuss the trade-offs between fairness and accuracy in the mitigation of algorithmic bias, followed by a robust offering of self-regulatory best practices, public policy recommendations, and consumer-driven strategies for addressing online biases.

We conclude by highlighting the importance of proactively tackling the responsible and ethical use of machine learning and other automated decision-making tools. Algorithmic bias can manifest in several ways with varying degrees of consequences for the subject group. Consider the following examples, which illustrate both a range of causes and effects that either inadvertently apply different treatment to groups or deliberately generate a disparate impact on them.

Princeton University researchers used off-the-shelf machine learning AI software to analyze and link millions of words of web text, finding that the software absorbed the same racial and gender associations present in human language. If the learned associations of these algorithms were used as part of a search-engine ranking algorithm or to generate word suggestions as part of an auto-complete tool, it could have a cumulative effect of reinforcing racial and gender biases. Latanya Sweeney, Harvard researcher and former chief technology officer at the Federal Trade Commission (FTC), found that online search queries for African-American names were more likely to return ads to that person from a service that renders arrest records, as compared to the ad results for white names.

MIT researcher Joy Buolamwini found that the algorithms powering three commercially available facial recognition software systems were failing to recognize darker-skinned complexions.

When the person in the photo was a white man, the software was accurate 99 percent of the time at identifying the person as male. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which is used by judges to predict whether defendants should be detained or released on bail pending trial, was found to be biased against African-Americans, according to a report from ProPublica. Compared to whites who were equally likely to re-offend, African-Americans were more likely to be assigned a higher-risk score, resulting in longer periods of detention while awaiting trial.

While these examples of bias are not exhaustive, they suggest that these problems are empirical realities and not just theoretical concerns. They also illustrate how these outcomes emerge, and in some cases, without malicious intent by the creators or operators of the algorithm. Acknowledging the possibility and causes of bias is the first step in any mitigation approach.

Historical human biases are shaped by pervasive and often deeply embedded prejudices against certain groups, which can lead to their reproduction and amplification in computer models. If historical biases are factored into the model, it will make the same kinds of wrong judgments that people do. For example, African-Americans who are primarily the target for high-interest credit card options might find themselves clicking on this type of ad without realizing that they will continue to receive such predatory online suggestions.

In this and other cases, the algorithm may never accumulate counter-factual ad suggestions (e.g., offers for lower-interest credit that the consumer might actually prefer). Thus, it is important for algorithm designers and operators to watch for such potential negative feedback loops that cause an algorithm to become increasingly biased over time.
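A toy simulation can make this feedback-loop risk concrete. Everything below is an assumption made for illustration (the click rates, the starting counts, and the greedy ad-selection rule); the point is only that an option the system stops serving can never correct the system's mistaken estimate of it.

```python
# Toy feedback-loop simulation: the system always serves the ad it currently
# believes performs best, and only updates its beliefs about ads it serves.
import random

random.seed(0)

# True click rates for this group (unknown to the system). The low-interest
# ad is actually preferred, but the system does not know that.
true_click_rate = {"high_interest_ad": 0.10, "low_interest_ad": 0.30}

# Historical counts that already favor the high-interest ad.
shown = {"high_interest_ad": 50, "low_interest_ad": 50}
clicks = {"high_interest_ad": 10, "low_interest_ad": 1}

def estimated_rate(ad):
    return clicks[ad] / shown[ad]

for _ in range(10_000):
    # Greedy policy: serve whichever ad currently looks better.
    ad = max(true_click_rate, key=estimated_rate)
    shown[ad] += 1
    clicks[ad] += random.random() < true_click_rate[ad]

# The high-interest ad's estimate settles near its true 10 percent rate, which
# still beats the low-interest ad's frozen 2 percent estimate, so the
# better-performing alternative is effectively never shown or re-evaluated.
print({ad: round(estimated_rate(ad), 3) for ad in true_click_rate})
print(shown)
```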

Insufficient training data is another cause of algorithmic bias. If the data used to train the algorithm are more representative of some groups of people than others, the predictions from the model may also be systematically worse for unrepresented or under-represented groups.

The facial recognition failures described earlier illustrate this point: the algorithm presumably picked up on certain facial features, such as the distance between the eyes, the shape of the eyebrows, and variations in facial skin shades, as ways to detect male and female faces. However, the facial features that were more representative in the training data were not as diverse and, therefore, less reliable in distinguishing between complexions, even leading to a misidentification of darker-skinned females as males.
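The effect of an unrepresentative training sample can be sketched with synthetic data. All of the choices below (the group sizes, the single feature, and the shifted labeling rule for group B) are assumptions made to exaggerate the pattern: the model fits the well-represented group and is systematically less accurate for the scarce one.

```python
# Sketch: a classifier trained mostly on group A is less accurate for group B,
# whose (synthetic) relationship between feature and label differs slightly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, threshold):
    """One feature; the labeling rule differs between groups via the threshold."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training set: 1,000 examples from group A, only 20 from group B.
x_a, y_a = make_group(1000, threshold=0.0)
x_b, y_b = make_group(20, threshold=1.0)
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

# Equal-sized held-out test sets: group A scores markedly higher than group B.
for name, threshold in [("group A", 0.0), ("group B", 1.0)]:
    x_test, y_test = make_group(500, threshold)
    print(name, "accuracy:", round(model.score(x_test, y_test), 2))
```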

Turner Lee has argued that it is often the lack of diversity among the programmers designing the training sample which can lead to the under-representation of a particular group or specific physical attributes. Conversely, algorithms with too much data, or an over-representation of certain groups, can skew the decision toward a particular result. Researchers at Georgetown Law School found that an estimated 117 million American adults are in facial recognition networks used by law enforcement, and that African-Americans were more likely to be singled out primarily because of their over-representation in mug-shot databases.

Understanding the various causes of biases is the first step in the adoption of effective algorithmic hygiene. But, how can operators of algorithms assess whether their results are, indeed, biased? Even when flaws in the training data are corrected, the results may still be problematic because context matters during the bias detection phase.

Systemic bias against protected classes can lead to collective, disparate impacts, which may have a basis for legally cognizable harms, such as the denial of credit, online racial profiling, or massive surveillance. These problematic outcomes should lead to further discussion and awareness of how algorithms work in the handling of sensitive information, and of the trade-offs between fairness and accuracy in the models. While it is intuitively appealing to think that an algorithm can be blind to sensitive attributes, this is not always the case.

For example, Amazon made a corporate decision to exclude certain neighborhoods from its same-day Prime delivery system. Their decision relied upon the following factors: whether a particular zip code had a sufficient number of Prime members, was near a warehouse, and had sufficient people willing to deliver to that zip code.

The results, even when unintended, discriminated against racial and ethnic minorities who were not included. There are also arguments that blinding the algorithm to sensitive attributes can cause algorithmic bias in some situations.

Thus, blinding the algorithm to any type of sensitive attribute may not solve bias. While roundtable participants were not in agreement on the use of online proxies in modeling, they largely agreed that operators of algorithms must be more transparent in their handling of sensitive information, especially if a potential proxy could itself stand in for a legally protected classification and give rise to harm.

When detecting bias, computer programmers normally examine the set of outputs that the algorithm produces to check for anomalous results. Comparing outcomes for different groups can be a useful first step. This could even be done through simulations. Roundtable participant Rich Caruana from Microsoft suggested that companies consider the simulation of predictions both true and false before applying them to real-life scenarios.
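As a sketch of what such a first-pass comparison might look like, the snippet below tallies the rate of favorable outcomes a model produces for each group; the group labels, predictions, and the 80 percent screening threshold (borrowed from the "four-fifths" heuristic used in employment selection, not something the roundtable prescribed) are illustrative assumptions.

```python
# First-pass bias check: compare rates of favorable outcomes across groups.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Share of favorable (1) predictions per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        favorable[g] += p
    return {g: favorable[g] / totals[g] for g in totals}

groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]   # e.g., loan approvals

rates = selection_rates(groups, predictions)
print(rates)  # {'A': 0.75, 'B': 0.25}

# Flag the model for review if any group's rate falls below 80 percent of the
# most favored group's rate (a rough screen, not a verdict of unfairness).
worst, best = min(rates.values()), max(rates.values())
print("flag for review:", worst < 0.8 * best)
```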

However, the downside of these approaches is that not all unequal outcomes are unfair; a disparity may be unfortunate, but is it unfair? Society holds several fairness goals at once, one of which is not incarcerating one minority group disproportionately [as a result of an algorithm]. As shown in the debates around the COMPAS algorithm, even error rates are not a simple litmus test for biased algorithms. It is not possible, in general, to equalize every type of error rate between groups at the same time.
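The same kind of audit can be extended to error rates. In the sketch below, the labels and predictions are invented; it only shows how false positive and false negative rates can be tallied per group, which is where the normative question of which rate should be equalized begins.

```python
# Per-group error-rate audit on toy data: the groups end up with different
# false positive and false negative rates, echoing the COMPAS-style debate.
def error_rates(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return {"false_positive_rate": round(fp / negatives, 2),
            "false_negative_rate": round(fn / positives, 2)}

# "1" means labeled (or predicted) high risk; rows are aligned by individual.
data = {
    "group_A": {"y_true": [0, 0, 0, 1, 1, 1, 0, 1],
                "y_pred": [0, 1, 0, 1, 1, 0, 0, 1]},
    "group_B": {"y_true": [0, 0, 1, 1, 0, 1, 0, 0],
                "y_pred": [1, 1, 1, 1, 0, 1, 0, 1]},
}

for group, d in data.items():
    print(group, error_rates(d["y_true"], d["y_pred"]))
```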

Thus, some principles need to be established for which error rates should be equalized in which situations in order to be fair. However, distinguishing between how the algorithm works with sensitive information and potential errors can be problematic for operators of algorithms, policymakers, and civil society groups. At the very least, there was agreement among roundtable participants that algorithms should not perpetuate historical inequities, and that more work needs to be done to address online discrimination.

Next, a discussion of trade-offs and ethics is needed. If the goal is to avoid reinforcing inequalities, what, then, should developers and operators of algorithms do to mitigate potential biases? We argue that developers of algorithms should first look for ways to reduce disparities between groups without sacrificing the overall performance of the model, especially whenever there appears to be a trade-off.

A handful of roundtable participants argued that opportunities exist for improving both fairness and accuracy in algorithms. For programmers, the investigation of apparent bugs in the software may reveal why the model was not maximizing for overall accuracy. The resolution of these bugs can then improve overall accuracy. Data sets, which may be under-representative of certain groups, may need additional training data to improve accuracy in the decision-making and reduce unfair results.

What is fundamentally behind these fairness and accuracy trade-offs should be discussions around ethical frameworks and potential guardrails for machine learning tasks and systems.

There are several ongoing and recent international and U.S. government efforts to develop principles for the ethical use of AI and automated decision-making. Their principles interpret fairness through the lenses of equal access, inclusive design processes, and equal treatment. Yet, even with these governmental efforts, it is still surprisingly difficult to define and measure fairness. Fairness is a human, not a mathematical, determination, grounded in shared ethical beliefs. Thus, algorithmic decisions that may have a serious consequence for people will require human involvement.

For example, while the training data discrepancies in the COMPAS algorithm can be corrected, human interpretation of fairness still matters. For that reason, while an algorithm such as COMPAS may be a useful tool, it cannot substitute for the decision-making that lies within the discretion of the human arbiter.

In the decision to create and bring algorithms to market, the ethics of likely outcomes must be considered—especially in areas where governments, civil society, or policymakers see potential for harm, and where there is a risk of perpetuating existing biases or making protected groups more vulnerable to existing societal inequalities. We suggest that this question is one among many that the creators and operators of algorithms should consider in the design, execution, and evaluation of algorithms, which are described in the following mitigation proposals.

Our first proposal addresses the updating of U.S. nondiscrimination and civil rights laws to apply to digital practices. To develop trust from policymakers, computer programmers, businesses, and other operators of algorithms must abide by U.S. laws and statutes that currently forbid discrimination against protected classes. Historically, nondiscrimination laws and statutes unambiguously define the thresholds and parameters for the disparate treatment of protected classes.

Enacted in 1974, the Equal Credit Opportunity Act stops any creditor from discriminating against any applicant in any type of credit transaction based on protected characteristics. While these laws do not necessarily mitigate and resolve other implicit or unconscious biases that can be baked into algorithms, companies and other operators should guard against violating these statutory guardrails in the design of algorithms, as well as work to mitigate implicit biases so that past discrimination does not continue.

We need to find a way to protect those who need it without stifling innovation. Moreover, when creators and operators of algorithms understand that certain factors may be more or less non-negotiable, the technical design will more thoughtfully move away from models that may trigger or exacerbate explicit discrimination, such as designs that exclude rather than include certain inputs or that are not checked for bias.

Once the idea for an algorithm has been vetted against nondiscrimination laws, we suggest that operators of algorithms develop a bias impact statement, which we offer as a template of questions that can be flexibly applied to guide them through the design, implementation, and monitoring phases.
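As a rough illustration only (the phases come from the text above, but the questions and structure below are hypothetical, not the template this paper offers), such a statement could be kept as structured data so that its answers are recorded and revisited at each phase.

```python
# Hypothetical sketch of a bias impact statement as structured data;
# the questions are illustrative placeholders, not the paper's template.
bias_impact_statement = {
    "design": [
        "Which groups will the algorithm's decisions affect, and how are they represented in the training data?",
        "Which attributes, or likely proxies for them, are sensitive in this context?",
    ],
    "implementation": [
        "What outcome and error-rate gaps between groups were measured before launch?",
        "Who reviewed the results, and what threshold triggers a redesign?",
    ],
    "monitoring": [
        "How often are group-level outcomes re-audited after deployment?",
        "What feedback channel exists for affected users and civil society groups?",
    ],
}

for phase, questions in bias_impact_statement.items():
    print(phase.upper())
    for question in questions:
        print(" -", question)
```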


