
Open Source Daily (开源日报)

  • Open Source Daily Issue 424: "learnxinyminutes-docs, offered in many languages"

    May 13, 2019
    Open Source Daily recommends one quality GitHub open-source project and one hand-picked English tech or programming article every day. Keep reading Open Source Daily to build a good habit of daily learning.
    Today's recommended open-source project: "learnxinyminutes-docs, offered in many languages"
    Today's recommended English article: "How Much Employee Surveillance Is Too Much?"

    Today's recommended open-source project: "learnxinyminutes-docs, offered in many languages". Portal: GitHub link
    Why we recommend it: a collection of learning materials for all kinds of programming languages. The project's biggest strength is that it is not limited to one language: the tutorials come in versions for many natural languages, so a quick filter will find yours. That said, the original English version is often more accurate than a translation, a Chinese translation is not always available, and most of the time putting the effort into learning some English is faster than waiting for one.
    Today's recommended English article: "How Much Employee Surveillance Is Too Much?" by Brian McIndoe
    Original link: https://medium.com/future-vision/how-much-employee-surveillance-is-too-much-bec1d249a4b5
    Why we recommend it: if AI is used to monitor employees, it may raise new ethical problems.

    How Much Employee Surveillance Is Too Much?

    On accepting an offer of employment, we expect that many of our privacy rights will be relinquished when using company systems. We are told that business email is company property and may be read by management. Web surfing is logged, and excessive non-business usage could be held against you. Company phone and voicemail are also subject to surveillance. This has been the case for decades and should not come as a surprise.

    What may come as a surprise, though, is the scale and power of the data collection, analysis, and prediction capabilities that companies are now deploying, which go far beyond traditional workplace monitoring.

    A new terrain littered with potential minefields

    The conventional way to gauge employee satisfaction with workplace conditions, compensation, benefits, and management, has been a survey, administered periodically by the HR department. While survey data may be informative, it’s giving way to real-time, automated data collection with sensors embedded directly in employee workstations and conference rooms. Machine learning and other AI techniques are used to make inferences about employee behavior and predict future events.

    This new direction is known as people analytics, and it's become big business. Vendors are in a gold rush to supply organizations of all sizes with this technology, claiming it optimizes workforce productivity, fosters collaboration, and increases happiness and well-being, all while improving retention.

    A bold new world has opened up for the HR department, but it’s a potential minefield for employers who fail to take ethical considerations into account and see it as a way to apply pressure to employees.

    Implementation of people analytics is occurring far faster than regulations governing its use can be developed. To his credit, President Trump recently signed an executive order calling for government agencies to start work on an AI Initiative and highlighted civil liberties as an area that needs to be thought through. His directive, however, has received criticism for lack of funding and being light on details.

    Several 2020 presidential candidates have stepped forward voicing concerns about the impact of workplace AI. At his announcement address, Senator Bernie Sanders said:
    “I’m running for president because we need to understand that artificial intelligence and robotics must benefit the needs of workers, not just corporate America and those who own that technology.”
    Senator Kamala Harris, in letters to government agencies, has expressed concern about biased algorithms, particularly those behind facial recognition systems, stating:
    “While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases.”
    Andrew Yang has an apocalyptic vision for the mass adoption of AI in the workplace. He sees it as part of the greatest economic transformation in world history:
    “…the worst case scenario, unfortunately, is chaos, violence, and a disintegration of our way of life.”
    Many others are sounding the alarm on what they see as a Pandora’s box of potential employee abuse. The IEEE formed a Global Initiative which published guidelines and general principles on how autonomous and intelligent systems should be integrated into the workplace. The AI Now Institute, established to research the social implications of AI, recently published an article on diversity issues and discrimination in AI.

    Vendors have expressed a mix of excitement and trepidation about the potential of people analytics. Dr. Louis Rosenberg, CEO of Unanimous AI, in a recent interview, said,
    “These tools will not just be documenting what we say and what we type, but will be observing our reactions, predicting our next steps and tracking the accuracy of their forecasts, even documenting our moods.”
    He followed up with,
    “Unless regulations prevent it, AI-enabled virtual assistants will become the eyes and ears of HR departments, disseminating data on the work habits and enthusiasm of employees.”

    A new form of Capitalism?

    Since the industrial revolution, capitalism and technology have formed a symbiotic relationship, exploring new areas of profit maximization. The latest evolution has been given the designation “Surveillance Capitalism”, a term coined by Shoshana Zuboff in her book, The Age of Surveillance Capitalism. Zuboff, in an interview with The Harvard Gazette, defines surveillance capitalism as,
    “…the unilateral claiming of private human experience as free raw material for translation into behavioral data.”
    Vivid examples of this occur on the web. Who hasn’t had the experience of searching for a consumer item, then being peppered with ads for that item for days on end?

    In a recent interview with the Guardian, Zuboff illustrated these developments, claiming,
    “Once we searched Google, but now Google searches us. Once we thought of digital services as free, but now surveillance capitalists think of us as free.”
    This new strain of capitalism has made the jump from commercialism to the workplace in the form of people analytics, raising profound implications for how behavior can be manipulated for organizational ends. Zuboff relates a conversation she had with a data scientist who said,
    “We can engineer the context around a particular behavior and force change that way… We are learning how to write the music, and then we let the music make them dance.”

    New ways to collect, analyze and predict using employee data

    A question we must all confront is how intrusive would technology have to be before we say — enough? This is the reality many now face in their work lives and is likely to become more so in our private lives, as we let more intelligent gadgets with embedded sensors into our homes and cars. Are we moving by stealth toward a future similar to China’s notorious social credit system?

    Known for pushing the edge of technology, Amazon is now marketing its Alexa voice-activated devices to businesses. The creative ways in which this technology could be used are stunning. Virtually every conversation in an organization can now be recorded and analyzed, and inferences drawn about participants. Discussions can be scrutinized for tone and sentiment, and dominant players can be sifted from the followers.

    Vendors market these tools as a better way to identify key individuals and more efficiently distribute tasks to reduce stress and overwork. A UK-based company, Status Today, sells a product called Isaak, with 820 companies as clients. Featured in a recent article, Isaak's developers say it offers companies,
    “real-time insights into each employee and their position within the organizational network.”
    The CEO admits, though, that “there's always a risk that it might be misused”, should employers decide to apply it only to boost productivity while ignoring wellbeing.

    Companies implement security barriers to protect both property and employees, with access usually controlled by a security badge. Humanyze, the creation of two former MIT students, has morphed the common old security badge into a smart badge, equipped with powerful monitoring features.

    Digital Trends describes the capabilities of these smart badges as including “radio frequency identification (RFID) and near field communications (NFC) sensors … Bluetooth, an infrared detector capable of tracking face-to-face interactions, an accelerometer, and two microphones.” The office space is equipped with beacons that detect an employee’s location. The company stresses that the data collected by this technology, up to 4GB per day per employee, is aggregated and anonymized before being sent to management.
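
    Humanyze's actual pipeline is proprietary, so purely as an illustration of what "aggregated and anonymized" might mean in practice, here is a minimal Python sketch; the event fields, team grouping, and minimum-group-size rule are our assumptions, not the company's documented behavior.

        from collections import defaultdict
        from statistics import mean

        # Hypothetical badge events: (employee_id, team, face_to_face_minutes).
        events = [
            ("e1", "sales", 42), ("e2", "sales", 35),
            ("e3", "eng", 12), ("e4", "eng", 18), ("e5", "eng", 25),
        ]

        def aggregate_anonymized(events, k=3):
            """Average per team with employee ids dropped; suppress teams
            smaller than k so no individual can be singled out."""
            by_team = defaultdict(list)
            for _, team, minutes in events:
                by_team[team].append(minutes)
            return {team: round(mean(vals), 1)
                    for team, vals in by_team.items()
                    if len(vals) >= k}

        print(aggregate_anonymized(events))  # {'eng': 18.3}; 'sales' suppressed (n < k)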

    Ben Waber, CEO of Humanyze, points to unexpected lessons that have emerged from analyzing the data collected. As an example, one client,
    “discovered that coders who sat at 12-person lunch tables tended to outperform those who regularly sat at four-person tables.”
    Larger lunch tables were “driving more than a 10% difference in performances”, a fact that would probably have gone undetected without such data analysis.

    The unintended consequences of total employee surveillance

    Employee well-being and reduced stress are touted as benefits of increased workplace surveillance. One can't help wondering whether employees' knowledge of the vast quantities of data vacuumed up and picked over by proprietary algorithms will lead to outcomes opposite of those intended. No one likes to be micromanaged, and these developments appear to take it to a whole new level. This thought is articulated by Ursula Huws, a professor at the University of Hertfordshire in the UK, who said,
    “If performance targets are being fine-tuned by AI and your progress towards them being measured by AI, that will only multiply the pressure.”
    Meetings, often the bane of corporate life, are likely to become even more vexing as participants, aware that their words and gestures are being dissected, rated and ranked, jostle for dominance. At least some of them will; others are likely to tune out the noise and plot their escape. The software development business, where Asperger's and personality quirks are legion, may have a difficult time adapting to the level of monitoring and assessment afforded by these technologies. It remains to be seen whether companies that offer a surveillance-free environment end up having an edge in recruiting and keeping top technical talent.

    Ethical issues will dominate as the power and scale of this technology grow. Much-needed clarification of the inherent privacy issues, and the development of regulations to set limits, will no doubt be fodder for many future court cases.
  • Open Source Daily Issue 423: "css-only-chat: could any other CSS do this?"

    May 12, 2019
    Open Source Daily recommends one quality GitHub open-source project and one hand-picked English tech or programming article every day. Keep reading Open Source Daily to build a good habit of daily learning.
    Today's recommended open-source project: "css-only-chat: could any other CSS do this?"
    Today's recommended English article: "AI & Ethics: Are We Making It More Difficult On Ourselves?"

    Today's recommended open-source project: "css-only-chat: could any other CSS do this?" Portal: GitHub link
    Why we recommend it: the power of CSS has its limits. If there is one thing my short life has taught me, it is that the more you play with code, the more you discover that the power of CSS has its limits...
    Objection! With CSS, I'll show you that even real-time chat can be done!

    Today's recommended English article: "AI & Ethics: Are We Making It More Difficult On Ourselves?" by Patrick McClory
    Original link: https://medium.com/@pmdev/ai-ethics-are-we-making-it-more-difficult-on-ourselves-2783e48c95d2
    Why we recommend it: as AI keeps advancing, ethical questions grow ever more important; perhaps one day AI will have an inviolable behavioral baseline much like the Three Laws of Robotics.

    AI & Ethics: Are We Making It More Difficult On Ourselves?

    Not too long ago we discussed the AI Apocalypse as it pertained to the Facebook #TenYearChallenge. Is Facebook evil? Are we evil for helping usher in our own demise? As we put it: not quite. However, AI & ethics seem inextricably linked, and for good reason. This is part of an ongoing series on the question of AI and ethics. And there's no better place to start than with science fiction, of course.

    The question of what artificial intelligence could be capable of has captured our imaginations for a long while. The truth is, the idea may stretch back, at least in concept, as far as Ancient Greece.

    To get philosophical, the idea of what humankind’s creations could be capable of is not new. Neither is the notion of how we would contend with this. However, at least in a modern sense, Isaac Asimov was instrumental in thrusting the question into the public debate. At least as it pertains to artificial intelligence. Namely, robots. What could robots do? And how could we stop them?

    Thankfully, Asimov had a solution. The Three Laws of Robotics were first introduced in the 1942 short story “Runaround.” Asimov provides a set of guidelines which were key components of all robot programming:
    • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
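
    To make that strict precedence concrete, here is a toy Python sketch; it is purely illustrative (nothing in Asimov's stories or in real robot software specifies such code), with a lower-numbered law always overriding a higher-numbered one:

        # Toy rule check: each clause is an invented placeholder predicate.
        def evaluate(action):
            if action["would_harm_human"]:     # First Law: absolute veto
                return "forbidden by First Law"
            if action["ordered_by_human"]:     # Second Law: yields only to the First
                return "required by Second Law"
            if action["risks_robot"]:          # Third Law: yields to both above
                return "avoid if possible (Third Law)"
            return "permitted"

        # A human order overrides the robot's self-preservation:
        print(evaluate({"would_harm_human": False,
                        "ordered_by_human": True,
                        "risks_robot": True}))  # required by Second Law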

    Is that Enough?

    The concept of AI & ethics is nothing new. Nor should it be. We believe strongly that anyone who occupies this space should be thinking, evaluating, and considering the ethical implications of AI. Today, as well as tomorrow.

    The notion of robots turning on their creators makes for great science fiction. However, we aren't quite there yet. And it's a good thing, according to Scientific American, which doesn't think Asimov's laws would even work:
    While these laws sound plausible, numerous arguments have demonstrated why they are inadequate. Asimov's own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. Most attempts to draft new guidelines follow a similar principle to create safe, compliant and robust robots. (Christoph Salge, The Conversation US, July 11, 2017)
    Our modern day concerns with regard to AI & ethics are typically less about robots taking over the world and more about securing data from theft, preventing algorithmic biases, and the responsible way in which we approach AI, data, and more.

    True, it’s slightly less grandiose than worrying about how to keep us from building Skynet; but nonetheless there are important concerns regarding AI & ethics which warrant attention and scrutiny.

    Recent articles have raised these questions yet again. Most recently 4 Ways AI Education and Ethics Will Disrupt Society in 2019 and perhaps more pointedly: Is It Possible For AI To Be Ethical?

    Well, is it possible?

    Yes. And No. But Mostly Yes. Maybe.

    We’re generating a lot of data these days. A lot. And there are a lot of concerns of what we’re doing with that data. Understandably and smartly so. However, there are also concerns that we’re perhaps doing more harm than good. Or at least, from an endpoint perspective, we’re making things more difficult for ourselves.

    The General Data Protection Regulation (GDPR) which was implemented in the EU last year is one such example of something that is making things more difficult for us.

    From certain perspectives, it’s hard to argue with the notions set forth in GDPR. Namely that organizations have an ethical responsibility to handle your data properly, not share it, and keep it protected. All good things in theory.

    However, for better or for worse, it's also a wall. Walls can undoubtedly keep bad things from getting in, but they can also keep good things from getting out.

    What GDPR succeeds in doing, partly by design and partly unintentionally, is to cordon off data from the outside world. Is that a good thing? Well, not always.

    Contrast these ideas with the widespread accusation, and belief, that there are biased algorithms everywhere in Silicon Valley. That certain groups benefit from what should be (at least in certain minds) unbiased equations.

    Now, consider how these algorithms are created. Or more importantly, where they are created.

    The Walled Off Data Problem

    In the past, we were accustomed to obtaining data from a single source. Or at least, very few sources. And by "data" we mean thousands upon thousands of bits of information that, put together, create a coherent, workable model of algorithmic goodness.

    The problem perhaps unintentionally created by restrictive data protection laws is that they make data harder to come by legally. Because of concerns regarding AI & ethics, we're walling off data like never before. Keeping it restricted.

    Now, that may not sound like a bad thing if your mind conjures up images of a telemarketing firm building a model so it knows whom to call and bother at dinnertime. It may be a bad thing if you're a university medical research department building a model to predict, diagnose, or even cure disease.

    We’ve spoken at length in the past about “the silo problem” as it relates to development and deployment. Specialized teams are able to exhibit hyper-focused attention to one specific aspect of the problem. However, it doesn’t necessarily yield the best results or the best end product.

    The same can be said of approaching data in a silo. To tackle the world’s problems, or even attempt to do so, we need access to a lot of data. And as growing restrictions further cordon off that data, we run the risk of biasing our own data pool.

    To be clear: when we talk about being able to gather data in one place, we mean a wide array of data that is accessible from a single source, but not a wide array of data that originates from a single source.

    Let’s Bake Some Bread

    For example, it’s great to be able to go to a supermarket where we can purchase, bread, milk, meat, and vegetables all in one place. The supermarket is a great source of a lot of different types of products (data). If we wanted to build an algorithm to track or predict what groceries people purchase, a supermarket would be a good place to start.

    Why? Because we know that the shoppers there are going to purchase a wide variety of items, across different types and variants. We’ll be able to view a veritable ton of data to build our model.

    Now, let’s suppose that supermarkets didn’t exist. Indeed, it may be seen as “safer” or “better” to get your milk from a milkman, your produce from a vegetable market, or your bread from a baker specifically. However, it’s far less convenient and far more restrictive.

    If you are purchasing your bread from a single source, you are beholden to that single source and all the characteristics of that source. How, then, are we to build a model to track grocery purchases when we only have easy access to the baker's data?

    This is how we wind up unintentionally biasing our own algorithms.
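
    A contrived Python sketch of the baker-only problem (all items and counts are made up): the single-source sample simply never contains most of the categories the model is supposed to learn.

        from collections import Counter

        # Made-up purchase logs: a supermarket sees many categories,
        # a single specialist (the baker) sees only one.
        supermarket = ["bread", "milk", "meat", "vegetables", "bread", "milk"]
        baker_only = ["bread", "bread", "bread", "bread"]

        def category_shares(purchases):
            counts = Counter(purchases)
            total = sum(counts.values())
            return {item: round(n / total, 2) for item, n in counts.items()}

        print(category_shares(supermarket))  # bread/milk/meat/vegetables all present
        print(category_shares(baker_only))   # {'bread': 1.0}: milk never appears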

    Open Borders Data

    This is not to say there are no culturally significant social biases that can be built into algorithms or data practices. There absolutely can be. However, it is becoming increasingly difficult to build culturally significant and culture-spanning models because of the increasing difficulty of legally obtaining data across certain roadblocks.

    As a result, a model built in Silicon Valley might reflect the demography of Silicon Valley. A model built in India might reflect the demography of India, and so on. One of the issues with this one-size-fits-all approach is that it becomes difficult to meaningfully create a model from a set of data that may not reflect all users or all components, or to reach a realistic, ideal, or meaningful outcome if the data has been previously biased in one way or another.

    Again, not a bad thing if we’re stopping telemarketing in most people’s eyes. It can be a bad thing if we’re using concerns of AI & ethics to cut off our own nose to spite our face.

    The future of data collection and analysis is likely to look more like this: collect locally, repeat globally. It's a longer and more involved process, to be sure. However, the greater the push for enhanced data protection, the more restrictive access will become.
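
    One plausible reading of "collect locally, repeat globally" is federated-style aggregation: each region computes a summary (or model update) over data that never leaves it, and only those summaries are pooled. A minimal sketch under that assumption, with invented numbers:

        # Raw records stay in-region; only (count, mean) summaries cross borders.
        regional_data = {
            "eu": [3.1, 2.9, 3.4],
            "us": [4.0, 3.8],
            "apac": [2.5, 2.7, 2.6, 2.8],
        }

        def local_summary(values):
            return {"n": len(values), "mean": sum(values) / len(values)}

        def pooled_mean(summaries):
            # Weighted pooling of local summaries, federated-averaging style.
            total = sum(s["n"] for s in summaries)
            return sum(s["mean"] * s["n"] for s in summaries) / total

        summaries = [local_summary(v) for v in regional_data.values()]
        print(round(pooled_mean(summaries), 3))  # global estimate, no raw data shared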

    So What Do We Do?

    To a large degree, the conversation over AI & ethics is just getting started. And that's a good thing. Because as we said earlier, we believe there's an inherent responsibility for those who operate in this space to continue to ask these questions. Namely: are we behaving ethically? Are we contributing meaningful thought, as well as action, to the public space and the public debate surrounding these questions? As the technologies evolve, these questions need to keep being asked.

    To a degree, we believe that personal (and corporate) responsibility has to come into play. Government regulation can and will assist in pointing out the correct path. However, it will come with its own drawbacks and downsides, as mentioned above.

    There are good reasons for wanting regulation such as GDPR, and the tightening regulations in the USA as well. However, there are unintentional downsides such as those outlined above. It also makes it difficult for newcomers to the space to get started. This relegates operations to a select few who have the means, resources, and connections to move in this space.

    To a degree, the ethical treatment of AI may ultimately rest with those who control it. We may be a long way off from having to realistically worry about a robot uprising. Thankfully. That doesn’t mean there aren’t concerns with regard to bad actors in this space.

    We have a responsibility to use AI responsibly. That doesn’t mean there won’t be mistakes, missteps, and mishaps along the way. It would be foolish to think otherwise. However, the question of AI and ethics is also a fundamentally human one. As human as the human beings who write the code which implements Asimov’s Three Laws of Robotics.

    What happens when a bad actor “forgets” or omits this code? What happens when those charged with safeguarding data seek to misuse it? Not to wax too philosophical, but the question surrounding how ethical AI can be will, for the time being, rest ultimately within the confines of the ethical possibilities of human behavior.

    We, of course, have free will. Unlike our robot underlings. For now.
  • Open Source Daily Issue 422: "Terminal, a polished command line"

    May 11, 2019
    Open Source Daily recommends one quality GitHub open-source project and one hand-picked English tech or programming article every day. Keep reading Open Source Daily to build a good habit of daily learning.
    Today's recommended open-source project: "Terminal, a polished command line"
    Today's recommended English article: "In the future, you may be fired by an algorithm"

    Today's recommended open-source project: "Terminal, a polished command line". Portal: GitHub link
    Why we recommend it: most things bundled with an operating system are constrained by something called "backward compatibility", which forces you, when updating software, to make sure the old version can still work alongside the new one. This time Windows plans to break free of those legacy constraints and build a new command-line tool with better features. The first release is expected next month, so the new features they deliver are worth looking forward to.
    Today's recommended English article: "In the future, you may be fired by an algorithm" by Michael Renz
    Original link: https://towardsdatascience.com/in-the-future-you-may-be-fired-by-an-algorithm-35aefd00481f
    Why we recommend it: decisions made by algorithms need to become fairer; after all, the data an algorithm is built from is itself created by humans, and humans inevitably carry certain biases.

    In the future, you may be fired by an algorithm

    Algorithms determine the people we meet on Tinder, recognize our faces to open keyless doors, or fire us when our productivity drops. Machines are used to make decisions about health, employment, education, vital financial matters, and criminal sentencing. Algorithms are used to decide who gets a job interview, who gets a donor organ, or how an autonomous car reacts in a dangerous situation.

    Algorithms reflect and reinforce human prejudices

    There is a widespread misbelief that algorithms are objective because they rely on data, and data does not lie. Humans have the perception that mathematical models are trustworthy because they represent facts. We often forget that algorithms are created by humans, who selected the data and trained the algorithm. Human-sourced bias inevitably creeps into AI models, and as a result, algorithms reinforce human prejudices. For instance, a Google Images search for "CEO" produced 11 percent women, even though 27 percent of United States chief executives are women.

    Understanding human-based bias

    While Artificial Intelligence (AI) bias should always be taken seriously, the accusation itself should not be the end of the story. Investigations of AI bias will be around as long as people are developing AI technologies. This means iterative development, testing, and learning, and AI's advancement may go beyond what we previously thought possible, perhaps even where no human has gone before. Nevertheless, biased AI is nothing to be taken lightly, as it can have serious, life-altering consequences for individuals. Understanding when and in what form bias can impact data and algorithms becomes essential.

    One of the most obvious and common problems is sample bias, where data was collected in such a way that some members of the intended population are less likely to be included than others. Consider a model used by judges to make sentencing decisions. Obviously, it is unsuitable to take race into consideration: for historical reasons, African-Americans are disproportionately sentenced, which leads to racial bias in the statistics. But what about health insurance? In that context it is perfectly normal to judge males differently from females, yet to an AI algorithm the distinction between the two cases might not be so obvious. We humans have a complex view of morality: sometimes it is fine to consider attributes like gender, and sometimes it is against the law.
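
    A small simulation makes the sample-bias point concrete (all numbers are invented): when one group is much less likely to be included, any statistic estimated from the sample drifts away from the population's true value.

        import random

        random.seed(0)
        # Invented population: two equal-sized groups with different means.
        population = ([("A", random.gauss(50, 5)) for _ in range(5000)] +
                      [("B", random.gauss(60, 5)) for _ in range(5000)])

        def biased_sample(pop, p_include_b=0.2, size=1000):
            # Group B members are far less likely to make it into the sample.
            pool = [x for x in pop if x[0] == "A" or random.random() < p_include_b]
            return random.sample(pool, size)

        sample = biased_sample(population)
        true_mean = sum(v for _, v in population) / len(population)  # ~55
        sample_mean = sum(v for _, v in sample) / len(sample)        # ~52, pulled toward A
        print(round(true_mean, 1), round(sample_mean, 1))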

    The problem with AI ethics

    Data sets are not perfect; they are dirty by nature, and the moment we start cleaning them, human bias creeps in. In any case, the key problem is not the existence of bias; the deeper problem is the absence of a common understanding of morality, because it is not actually defined what the absence of bias should look like. This isn't true just in computer science; it's a challenge for humanity. One of the biggest online experiments about AI ethics in the context of autonomous cars is the "Trolley dilemma", which collected 40 million moral decisions from people in 233 countries. Unsurprisingly, the researchers found that countries' preferences differ widely: humans have different definitions of morality. The trolley problem is just an easy way to grasp the depth and complexity of ethics in data science, and needless to say, it goes beyond self-driving cars. This leads to the question of how a machine can make "fair" decisions while acknowledging individual and cross-cultural ethical variation. Can we expect an algorithm that has been trained by humans to be better than society?

    [Figure: a moral dilemma in which a self-driving car must make a decision]

    Creepy things A.I. starts to do on its own

    Admittedly, we are decades away from the science-fiction scenario where AI becomes self-aware and takes over the world. As of today, though, algorithms are no longer static; they evolve over time by making automatic modifications to their own behavior, and these modifications can introduce so-called AI-induced bias: an evolution that can go far beyond what was initially defined by humans. At some point we may (or may not) need to simply accept the outcome and trust the machine.
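
    The "AI-induced bias" mentioned above often takes the form of a feedback loop: a system retrained on outcomes it influenced drifts away from its initial behavior. A toy sketch with invented numbers:

        # A recommender starts with a near-even preference, shows the currently
        # favored item more often, then retrains on clicks that merely track
        # exposure rather than quality; the small initial tilt hardens.
        weights = [0.52, 0.48]
        for step in range(20):
            top = weights.index(max(weights))
            exposure = [0.8 if i == top else 0.2 for i in range(2)]
            clicks = exposure                                   # clicks follow exposure
            weights = [w + c for w, c in zip(weights, clicks)]  # naive retraining
        print([round(w / sum(weights), 2) for w in weights])    # ~[0.79, 0.21]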

    Way forward: Design for fairness

    Moving forward, our reliance on AI will deepen, inevitably causing many ethical issues to arise as humans move decisions to machines, putting health, justice, and businesses into the hands of algorithms with far-reaching impact, affecting everyone. For the future of AI, we'll need to answer tough ethical questions, which requires a multi-disciplinary approach and awareness at the leadership level, not handling by experts in technology alone. Data science knowledge needs to evolve into a standard skill for the future workforce.

    Like fire, machine learning is both powerful and dangerous, and it is vital that we figure out how to promote the benefits and minimize the harm. It is not limited to the bias that concerns people: it is also about providing transparency in decision making, clarifying accountability, and safeguarding core ethical values of AI such as equality, diversity, and freedom from discrimination. Over the next years, the ability to embed ethical awareness and regulation will emerge as a key challenge, one that can only be tackled through close collaboration among government, academia, and business. Regardless of the complexity of the challenge, it is an exciting time to be alive and to sharpen the AI revolution, which has the potential to improve the lives of billions of people around the world.

    Written by

    Cyrano Chen, Senior Data Scientist
    Joanne Chen, Head of Communication
    Michael Renz, Director of Innovation
  • Open Source Daily Issue 421: "comcastifyjs, slowing time down"

    May 10, 2019
    Open Source Daily recommends one quality GitHub open-source project and one hand-picked English tech or programming article every day. Keep reading Open Source Daily to build a good habit of daily learning.
    Today's recommended open-source project: "comcastifyjs, slowing time down"
    Today's recommended English article: "Refactor Your Project, One Step at a Time"

    Today's recommended open-source project: "comcastifyjs, slowing time down". Portal: GitHub link
    Why we recommend it: network speeds keep getting faster, and waiting on a loading bar is a rare sight these days... hence this project, which makes your images load slowly, and exactly how slowly is up to you. This JS library preloads images and then displays them while pretending the network is slow; it can even simulate the stalls caused by network latency. It is of little use in a serious project, but the entertainment value is well worth a look.
    Today's recommended English article: "Refactor Your Project, One Step at a Time" by Faith Chikwekwe
    Original link: https://medium.com/@faith.chikwekwe/refactor-your-project-one-step-at-a-time-21838df431ba
    Why we recommend it: refactoring is a matter of when, not if; this article shows how to get started with small steps.

    Refactor Your Project, One Step at a Time

    So you have a project that wasn’t your best? Maybe you wrote it at a hackathon. Maybe you had a tight deadline. Or maybe you’ve gotten much better as a programmer since you wrote it.

    So what should you do?

    You could bury it at the bottom of your GitHub profile and forget about it. Sometimes this isn't a bad thing. There are some projects which are not worth going back to.

    But if you’re reading this article, you care about this project.

    You believe in it.

    You want to make it something that you can be proud of.

    Don’t worry! We’ll go step by step through the refactoring process and discuss best practices to help you through your coding makeover.

    Before you Start, Make a Plan.



    Before you can embark on this type of programming journey, you need to make a plan. There are a few core decisions that will help provide you with a framework that you can follow throughout your refactor.
    1. Design the overall goal of the project: What is your product trying to accomplish? How does your project deliver upon that goal? Who is the user or the recipient of the product? How will they interact with it and what will they take away from it?
    2. Make a more refined goal for the outcome of this refactor: How will your improved code make your project better? Are you trying to improve things for the user by improving performance or creating a better user experience? Are you trying to make things better for yourself and other engineers by adding comments to your code, pulling large chunks of code apart into separate functions, or making your code DRY-er (DRY means Don't Repeat Yourself)?
    3. Make a framework for the goals of each class, route or major function: What is the particular thing that this class achieves? Why is this function necessary? What other systems does it interact with? What parameters does it take in and what objects does it return?

    Start Small.



    Okay! Now you’ve got a plan.

    A general outline of where this refactor will take you.

    You’ve renewed your understanding of the goals of your project. But how do you decide what part of the project to refactor first?

    The steps for every section of your refactor are the same.

    Pseudocode, write tests, show your code to others, and improve it until it passes your tests and fulfills your plan.

    It is important to remember to start small when you pick the first section that you will improve.

    Refactoring is hard.

    Make it easier on yourself by starting with some low hanging fruit.
    • That POST route that works, but isn’t very pretty.
    • That class method that you already kind of know how to detangle.
    If there isn’t an obvious place to begin, then just get in there and start.

    Anne, an instructor of mine, once told me that the secret to writing good tests (and good code in general) is to just get started.

    Write Some Pseudocode.



    You’ve got your soon-to-be-awesome function all picked out, and now it is time to write some pseudocode for it.

    The secret to writing good pseudocode is to write something that you will understand as a programmer and that non-technical stakeholders will also get.
    • Use indentation and whitespace effectively to mimic code spacing while keeping your pseudocode in plain English.
    • Try using variable names. If you do, name them semantically (in plain English and with purpose) and use them consistently throughout your pseudocode.
    • Use code-like terms to help explain what you might do at each step. Instead of writing for i in range(len(word_list)), write "loop over the word_list until you get to the end". (A short sketch of this translation follows the list.)
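
    For instance, a hypothetical word-counting routine might be pseudocoded in plain English and then translated almost line for line (the function and names here are invented for illustration):

        # Pseudocode (plain English, semantic names):
        #   make an empty tally of word counts
        #   loop over the word_list until you get to the end
        #     increase the tally for that word by one
        #   return the tally
        def count_words(word_list):
            tally = {}
            for word in word_list:  # loop over the word_list
                tally[word] = tally.get(word, 0) + 1
            return tally

        print(count_words(["dog", "walk", "dog"]))  # {'dog': 2, 'walk': 1}
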
    Now, let’s talk about non-technical stakeholders. You are making a website for your buddy’s dog-walking business. You want to tell him how you would like the user experience to play out as they navigate.

    Write your pseudocode so that you can show it to him. It doesn’t have to be exactly down to his level, but make it somewhat accessible.

    Even if you don’t have non-technical people involved, writing to this level will help keep the pseudocode friendly as a reference point for your conversations your future self and with other developers.

    Write New Tests or Perfect your Existing Ones.



    It is time to write some code!

    This guide is following the principles of Test Driven Development or TDD. If you prefer to code first and test later, you can do this step last.

    If you’re starting from scratch with your tests, you’ll want to start by writing some unit tests.

    Testing varies from language to language, so look up the guide for your preferred testing framework and follow proper conventions.

    Once you’ve gotten some basic tests written, or if you’ve already got tests written for this class or function, then review your tests for the following:
    • Does my test cover every possible outcome? e.g. Every condition in a conditional statement; every button that the user can click, etc.
    • Is my test readable? Understandable? Does it need comments that explain its aim?
    Once you’ve got good a test or a handful of tests, it is time to make changes to your actual code.

    Write Better Code.



    You’ve got a plan, you’ve written pseudocode and you’ve got working tests that are descriptive and small in scope.

    Now it is time to make your code more functional, readable and performant.

    Some things to look out for as you improve your code:
    • Reduce complexity. If you have highly nested and complex code, make sure that any nesting is truly necessary. You can often reduce cyclomatic complexity by drawing diagrams.
    • Add comments to explain complex portions of your code. Use those comments to explain the goal of each line of code and the expected outcome, if applicable.
    • Rename variables to be more semantic. Variable names like i and x have their place, but generally they don't help with code readability.
    • Break up your code. Cheeky one-liners can be nice in a pinch, but they tend to reduce the readability of your code. Consider breaking multiple function calls apart and using more variables throughout your code (see the before-and-after sketch following this list).
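
    A before-and-after sketch of those last two tips, with an invented example function: the behavior is unchanged, but semantic names and separated steps make the intent readable.

        # Before: a cheeky one-liner with opaque names.
        def f(xs):
            return sorted([x for x in xs if x % 2 == 0])[:3]

        # After: semantic names and one step per line; same behavior.
        def three_smallest_evens(numbers):
            evens = [n for n in numbers if n % 2 == 0]  # keep only even numbers
            evens.sort()                                # smallest to largest
            return evens[:3]

        data = [5, 2, 8, 4, 10, 6]
        assert f(data) == three_smallest_evens(data) == [2, 4, 6]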

    Work With Others.



    Once you’ve got something that you think is passable, you’re not done!

    It is time to share your code with other programmers. This can also be done alongside the other steps in this process.

    One of the most powerful tools out there is pair programming. This is when you sit side by side with another coder: one of you tells the other what to do (the navigator) and the other (the driver) types into the IDE (Integrated Development Environment). The idea is to pass these roles back and forth every 10 minutes or so to increase the flow of ideas.

    Pair programming can increase efficiency and productivity and also reduce errors. If you’d like to know a little bit more about pair programming, click here.

    If don’t have the option of pair programming or cannot because of scheduling and time constraints, ask for code review from a colleague, a programmer friend or a classmate.

    Any input from another person who understands your code will help to improve it. Getting feedback is a vitally important step.