
开源日报 (Open Source Daily)

  • April 24, 2018: Open Source Daily Issue 47

    April 24, 2018

    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg


    Today's recommended open source project: "Asynchrony and Asynchronous Programming"

    Why we recommend it: Today we are not introducing one particular open source project. In issue 12 of the Open Source Weekly, 《朝看花开满树红,正在教室学习中》, we introduced GINO, a lightweight asynchronous ORM built on SQLAlchemy Core for Python asyncio. This time the topic is what "asynchronous" actually means.

    If different program units can complete a task without having to coordinate with one another through communication along the way, that style of execution can be called asynchronous. First, only unrelated program units can run asynchronously with respect to each other. Take a web crawler: once the scheduler has handed a page to the downloader, it can move on to scheduling other tasks without staying in touch with that download to coordinate its behavior. Downloading and saving different pages are unrelated operations that need no mutual notification or coordination, and the exact moments at which these asynchronous operations finish are not determined in advance.

    In short, asynchrony means the absence of a fixed order.

    Asynchronous programming, then, is a style of programming that takes processes, threads, coroutines, or functions/methods as the basic units of task execution and combines them with mechanisms such as callbacks, event loops, and semaphores to improve a program's overall throughput and concurrency.

    If, while a program is running, you can tell exactly which concrete operation it will perform next from the instructions it has already executed, it is a synchronous program; otherwise it is asynchronous. That is precisely the difference between ordered and unordered execution.

    Synchronous vs. asynchronous

    Synchronous and asynchronous are opposites. Synchronous means that different program units coordinate with one another through some form of communication in order to complete a task; asynchronous means they can complete the task without relying on such coordination.

    Pros and cons of synchronous execution

    The biggest advantage of a synchronous flow is that it is ordered: results are simpler to handle, and errors are easier to find and fix. A synchronous flow also matches the way we naturally think, which makes programming simpler and the program easier to keep under control.

    The downside of synchronous execution shows up in interaction: if communication slows down or a task takes too long, the program has to pause and wait for it to finish, and during that wait it cannot do anything else, which amounts to wasted time and resources.

    Pros and cons of asynchronous execution

    Asynchrony means the absence of a fixed order: once a task has been started you can move on to other operations, and when it will finish is not determined. The biggest benefit is that the time spent waiting for a result can be used for other work, which improves the program's efficiency.
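    As a minimal sketch (not from the article), here is that idea in Python's asyncio, which the article returns to below: two simulated downloads run concurrently, so the total time is roughly that of the slowest task rather than the sum of both.

    import asyncio

    async def download(page, seconds):
        # Simulate a slow network request; await yields control back to the event loop.
        await asyncio.sleep(seconds)
        return f"{page}: done"

    async def main():
        # Start both "downloads" without waiting for the first to finish before the second begins.
        results = await asyncio.gather(
            download("page-1", 2),
            download("page-2", 3),
        )
        print(results)  # finishes in about 3 seconds, not 5

    asyncio.run(main())  # requires Python 3.7+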

    Asynchrony has drawbacks too. The program becomes harder to control: you cannot predict when which event will occur. Converting synchronous code to asynchronous code also means restructuring it, carefully arranging the asynchronous tasks, and working with callbacks, which can be genuinely difficult for beginners.

    The future of asynchronous programming

    The outlook for asynchronous programming is excellent. Why? Just look at what the big players have been doing.

    Every release of .NET and C# has had a "theme": C# 1.0 managed code → C# 2.0 generics → C# 3.0 LINQ → C# 4.0 dynamic typing → C# 5.0 asynchronous programming. The theme of C# 5.0 was async, which tells you that asynchronous programming is the direction the industry is moving in.

    At PDC 2010, Microsoft released the Visual Studio Async CTP, which greatly lowered the difficulty of asynchronous programming by letting us write asynchronous code much the way we write synchronous methods. Why did Microsoft put effort into making async easier? Because the market demanded it: it makes developers' lives easier. A technology that so many developers believe in seems a safe bet to me.

    Judging from the last two PyCon conferences, asynchronous programming has become the main theme of the next stage of the Python ecosystem. Newer languages such as Go, Rust, and Elixir all treat support for asynchrony and high concurrency as a major selling point, and that is where the technology is heading. To avoid falling behind, the Python ecosystem started the Tulip (asyncio) project in 2013, led personally by Python's creator Guido van Rossum. Clearly everyone has realized that asynchronous programming is going to stir up a technical storm and is joining the movement. Can you still say its prospects are poor?


    Today's recommended English article: "How Google autocomplete works in Search" by Danny Sullivan

    Original link: https://www.blog.google/products/search/how-google-autocomplete-works-search/

    Why we recommend it: How does Google's autocomplete actually work? Many readers are curious, and here is Google's official answer.

    How Google autocomplete works in Search

    Autocomplete is a feature within Google Search designed to make it faster to complete searches that you’re beginning to type. In this post—the second in a series that goes behind-the-scenes about Google Search—we’ll explore when, where and how autocomplete works.

    Using autocomplete

    Autocomplete is available most anywhere you find a Google search box, including the Google home page, the Google app for iOS and Android, the quick search box from within Android and the “Omnibox” address bar within Chrome. Just begin typing, and you’ll see predictions appear:

    In the example above, you can see that typing the letters “san f” brings up predictions such as “san francisco weather” or “san fernando mission,” making it easy to finish entering your search on these topics without typing all the letters.

    Sometimes, we’ll also help you complete individual words and phrases, as you type:

    Autocomplete is especially useful for those using mobile devices, making it easy to complete a search on a small screen where typing can be hard. For both mobile and desktop users, it’s a huge time saver all around. How much? Well:

    • On average, it reduces typing by about 25 percent
    • Cumulatively, we estimate it saves over 200 years of typing time per day. Yes, per day!

    Predictions, not suggestions

    You’ll notice we call these autocomplete “predictions” rather than “suggestions,” and there’s a good reason for that. Autocomplete is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed. These are our best predictions of the query you were likely to continue entering. How do we determine these predictions? We look at the real searches that happen on Google and show common and trending ones relevant to the characters that are entered and also related to your location and previous searches.

    The predictions change in response to new characters being entered into the search box. For example, going from “san f” to “san fe” causes the San Francisco-related predictions shown above to disappear, with those relating to San Fernando then appearing at the top of the list:

    That makes sense. It becomes clear from the additional letter that someone isn’t doing a search that would relate to San Francisco, so the predictions change to something more relevant.
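    As an illustration only (this is not Google's system), the core mechanism described above, ranking real past queries that extend the typed prefix, can be sketched in a few lines of Python with a made-up query log:

    from collections import Counter

    # A tiny, made-up log of past queries and how often they were searched.
    query_log = Counter({
        "san francisco weather": 120,
        "san francisco giants": 90,
        "san fernando mission": 40,
        "san fernando valley": 35,
    })

    def predict(prefix, k=3):
        # Keep only queries that extend the prefix, most frequent first.
        matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda m: -m[1])[:k]]

    print(predict("san f"))   # both San Francisco and San Fernando queries
    print(predict("san fe"))  # only the San Fernando queries remain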

    Why some predictions are removed

    The predictions we show are common and trending ones related to what someone begins to type. However, Google removes predictions that are against our autocomplete policies, which bar:

    • Sexually explicit predictions that are not related to medical, scientific, or sex education topics
    • Hateful predictions against groups and individuals on the basis of race, religion or several other demographics
    • Violent predictions
    • Dangerous and harmful activity in predictions

    In addition to these policies, we may remove predictions that we determine to be spam, that are closely associated with piracy, or in response to valid legal requests.

    A guiding principle here is that autocomplete should not shock users with unexpected or unwanted predictions.

    This principle and our autocomplete policies are also why popular searches as measured in our Google Trends tool might not appear as predictions within autocomplete. Google Trends is designed as a way for anyone to deliberately research the popularity of search topics over time. Autocomplete removal policies are not used for Google Trends.

    Why inappropriate predictions happen

    We have systems in place designed to automatically catch inappropriate predictions and not show them. However, we process billions of searches per day, which in turn means we show many billions of predictions each day. Our systems aren’t perfect, and inappropriate predictions can get through. When we’re alerted to these, we strive to quickly remove them.
    It’s worth noting that while some predictions may seem odd, shocking or cause a “Who would search for that!” reaction, looking at the actual search results they generate sometimes provides needed context. As we explained earlier this year, the search results themselves may make it clearer in some cases that predictions don’t necessarily reflect awful opinions that some may hold but instead may come from those seeking specific content that’s not problematic. It’s also important to note that predictions aren’t search results and don’t limit what you can search for.

    Regardless, even if the context behind a prediction is good, even if a prediction is infrequent,  it’s still an issue if the prediction is inappropriate. It’s our job to reduce these as much as possible.

    Our latest efforts against inappropriate predictions

    To better deal with inappropriate predictions, we launched a feedback tool last year and have been using the data since to make improvements to our systems. In the coming weeks, expanded criteria applying to hate and violence will be in force for policy removals.
    Our existing policy protecting groups and individuals against hateful predictions only covers cases involving race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. Our expanded policy for search will cover any case where predictions are reasonably perceived as hateful or prejudiced toward individuals and groups, without particular demographics.

    With the greater protections for individuals and groups, there may be exceptions where compelling public interest allows for a prediction to be retained. With groups, predictions might also be retained if there’s clear “attribution of source” indicated. For example, predictions for song lyrics or book titles that might be sensitive may appear, but only when combined with words like “lyrics” or “book” or other cues that indicate a specific work is being sought.

    As for violence, our policy will expand to cover removal of predictions which seem to advocate, glorify or trivialize violence and atrocities, or which disparage victims.

    How to report inappropriate predictions

    Our expanded policies will roll out in the coming weeks. We hope that the new policies, along with other efforts with our systems, will improve autocomplete overall. But with billions of predictions happening each day, we know that we won’t catch everything that’s inappropriate.
    Should you spot something, you can report using the “Report inappropriate predictions” link we launched last year, which appears below the search box on desktop:

    For those on mobile or using the Google app for Android, long press on a prediction to get a reporting option. Those using the Google app on iOS can swipe to the left to get the reporting option.

    By the way, if we take action on a reported prediction that violates our policies, we don’t just remove that particular prediction. We expand to ensure we’re also dealing with closely related predictions. Doing this work means sometimes an inappropriate prediction might not immediately disappear, but spending a little extra time means we can provide a broader solution.

    Making predictions richer and more useful

    As said above, our predictions show in search boxes that range from desktop to mobile to within our Google app. The appearance, order and some of the predictions themselves can vary along with this.
    When you’re using Google on desktop, you’ll typically see up to 10 predictions. On a mobile device, you’ll typically see up to five, as there’s less screen space.

    On mobile or Chrome on desktop, we may show you information like dates, the local weather, sports information and more below a prediction:

    In the Google app, you may also notice that some of the predictions have little logos or images next to them. That’s a sign that we have special Knowledge Graph information about that topic, structured information that’s often especially useful to mobile searchers:

    Predictions also will vary because the list may include any related past searches you’ve done. We show these to help you quickly get back to a previous search you may have conducted:

    You can tell if a past search is appearing because on desktop, you’ll see the word “Remove” appear next to a prediction. Click on that word if you want to delete the past search.

    On mobile, you’ll see a clock icon on the left and an X button on the right. Click on the X to delete a past search. In the Google App, you’ll also see a clock icon. To remove a prediction, long press on it in Android or swipe left on iOS to reveal a delete option.

    You can also delete all your past searches in bulk, or by particular dates or those matching particular terms using My Activity in your Google Account.

    More about autocomplete

    We hope this post has helped you understand more about autocomplete, including how we’re working to reduce inappropriate predictions and to increase the usefulness of the feature. For more, you can also see our help page about autocomplete.
    You can also check out the recent Wired video interview below, where our vice president of search Ben Gomes and the product manager of autocomplete Chris Haire answer questions about autocomplete that came from…autocomplete!


    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg

  • April 23, 2018: Open Source Daily Issue 46

    April 23, 2018

    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg


    Today's recommended open source project: "Rough.js, a lightweight hand-drawn-style graphics library"

    Why we recommend it: Rough.js is a lightweight (~8 KB), Canvas-based JavaScript library that lets you draw in a rough, hand-sketched style. It defines primitives for drawing lines, curves, arcs, polygons, circles, and ellipses, and it can also draw SVG paths.

     

    Usage

    First, install it via npm.

    Then create a canvas element:

    <canvas id="canvas" width="800" height="600"></canvas>

    Then, inside a <script> tag, grab the canvas by its id:

    const rc = rough.canvas(document.getElementById('canvas'));

    After that, you can draw with calls of the following form:

    rc.line(60, 60, 190, 60);

    rc.rectangle(10, 10, 100, 100);

    rc.rectangle(140, 10, 100, 100, {
      fill: 'rgba(255,0,0,0.2)',
      fillStyle: 'solid',
      roughness: 2
    });

    and enjoy drawing with that syntax.

    For example, the code above draws two charmingly wobbly rectangles with a few properties set, producing the pattern below:

    (Screenshot: the two hand-drawn rectangles produced by the code above.)

     

    Next, let's draw the 开源工场 (Open Source Workshop) logo:

    rc.line(371.92,140.77,198.92,100.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(371.92,278.77,368.92,140.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(372.92,281.77,205.92,299.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(203.92,301.77,201.92,100.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(198.92,104.77,94.92,136.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(94.92,141.77,97.92,272.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(93.92,274.77,205.92,300.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(134.92,152.77,113.92,173.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(152.92,161.77,109.92,198.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(167.92,172.77,114.92,224.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(167.92,197.77,133.92,235.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(174.92,227.77,155.92,247.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(265.92,198.77,236.92,179.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(266.92,199.77,238.92,216.77,{roughness:2,stroke:'#68a1e8'});
    rc.line(301.92,247.77,271.92,248.77,{roughness:2,stroke:'#68a1e8'});

    The result:

    (Screenshot: the hand-drawn 开源工场 logo rendered by the code above.)

     

    All right, go and have fun drawing!

    Note: everything drawn with it comes out looking delightfully quirky. :)

     

    About the author

    Preet Shihn

    An engineer in San Francisco who enjoys music, playing 《掘地求生》, following the news, and using memes, and who disses Trump on Twitter. Together with others he built a site called Channels (https://channels.cc/), billed as the world's first multi-choice micropayment marketplace for content: you publish your work there, and if people read it, you get paid.

     


    Today's recommended English article: "Artificial Intelligence — The Revolution Hasn't Happened Yet" by Michael Jordan

    Original link: https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

    Why we recommend it: Is the AI revolution still far from happening? Artificial intelligence has been booming in recent years, with phrases like "AI phone" and "AI camera" everywhere, and countless insiders and outsiders alike are thrilled by the field's progress. Yet Michael Jordan, a member of the US National Academy of Sciences and a professor at UC Berkeley, says the AI revolution has not happened yet; hear him out. Jordan is a highly accomplished researcher in artificial intelligence and machine learning, sometimes even called the "father of machine learning," and he was the doctoral advisor of the well-known Chinese-American scientist Andrew Ng.

     

    Artificial Intelligence — The Revolution Hasn’t Happened Yet

    Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.

    There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”
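    The false-positive reasoning in this anecdote can be made concrete with a toy Bayes calculation. All of the numbers below are hypothetical, not taken from the article or the UK study; they only show how a higher false-positive rate on a sharper machine deflates what a quoted "1 in 20" really means.

    # Toy Bayes calculation with made-up numbers.
    prior = 1 / 700        # assumed baseline rate of Down syndrome
    sensitivity = 0.5      # assumed P(white spots seen | Down syndrome)

    def posterior(fp_rate):
        # Bayes' rule: P(Down | spots) = P(spots | Down) * P(Down) / P(spots)
        evidence = sensitivity * prior + fp_rate * (1 - prior)
        return sensitivity * prior / evidence

    print(posterior(0.01))  # low false-positive rate: about 1 in 15, the "1 in 20" regime
    print(posterior(0.07))  # machine flags spots far more often: about 1 in 100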

    We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

    I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.

    Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

    While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.

    Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

    And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design.

    The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

    Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and logistics-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems.

    This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny.

    Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years hence, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.

    Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

    Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon.

    One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.

    The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.

    Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.

    For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers.

    We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.

    Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.

    As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory?

    A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda.

    It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.

    Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.

    IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean).

    It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)

    But we need to move beyond the particular historical perspectives of McCarthy and Wiener.

    We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II.

    This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.

    While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities.

    On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.

    Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.

    In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.

    I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.

    Michael I. Jordan


    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg

  • April 22, 2018: Open Source Daily Issue 45

    April 22, 2018

    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg


    Today's recommended open source project: "TensorFlow deep learning models"

    Why we recommend it: For readers interested in deep learning, here is an officially curated collection of deep learning models. The repository contains many different models implemented in TensorFlow; the GitHub address is https://github.com/tensorflow/models

    Official models

    mnist: a basic model that classifies digits from the MNIST dataset. It was originally designed for digit recognition and also serves as a standard introductory example for deep learning.

     

    resnet: a deep residual network that can classify CIFAR-10 and the 1000-class ImageNet dataset. Because the accuracy of plain networks starts to degrade once they grow beyond a certain depth, residual networks were developed to make much deeper models trainable.

     

    wide_deep: a model that combines a wide linear model with a deep network, used to classify census income data. Once trained, the network can infer the target value from the values of the other fields.

     

    Research models (unofficial; maintained by individual researchers)

    adversarial_crypto: protecting communications with adversarial neural cryptography.
    adversarial_text: semi-supervised sequence learning with adversarial training.
    attention_ocr: a model for extracting text from images (aimed at noisy real-world scenes).
    autoencoder: various autoencoders.
    brain_coder: program synthesis with reinforcement learning.
    cognitive_mapping_and_planning: a spatial-memory-based mapping and planning architecture for visual navigation.
    compression: compressing and decompressing images with a pre-trained residual GRU network.
    deeplab: DeepLab for semantic image segmentation.
    delf: deep local features for image matching and retrieval.
    differential_privacy: privacy-preserving student models aggregated from multiple teachers.
    domain_adaptation: domain separation networks.
    gan: generative adversarial networks.
    im2txt: a neural network that generates text captions for images.
    inception: deep convolutional networks for computer vision.
    learning_to_remember_rare_events: a large-scale lifelong memory module for deep learning.
    lfads: a sequential variational autoencoder for analyzing neuroscience data.
    lm_1b: language modeling on the One Billion Word benchmark.
    maskgan: text generation with GANs.
    namignizer: recognizing and generating names.
    neural_gpu: a highly parallel neural computer.
    neural_programmer: a neural network augmented with logic and mathematical operations.
    next_frame_prediction: probabilistic future-frame synthesis via cross-convolutional networks.
    object_detection: localizing and identifying multiple objects in a single image.
    pcl_rl: code for several reinforcement learning algorithms, including Path Consistency Learning.
    ptn: perspective transformer nets for 3D object reconstruction.
    qa_kg: module networks for question answering over knowledge graphs.
    real_nvp: density estimation using real-valued non-volume-preserving (real NVP) transformations.
    rebar: low-variance, unbiased gradient estimates for discrete latent-variable models.
    resnet: deep and wide residual networks.
    skip_thoughts: a recurrent neural network sentence-to-vector encoder.
    slim: image classification models in TF-Slim.
    street: identifying the name of a street (in France) from an image using deep learning.
    swivel: the Swivel algorithm for generating word embeddings.
    syntaxnet: neural models of natural language syntax.
    tcn: self-supervised representation learning from multi-view video.
    textsum: a sequence-to-sequence model with attention for text summarization.
    transformer: a spatial transformer network, which allows spatial manipulation of data within the network.
    video_prediction: predicting future video frames with neural advection (similar to next_frame_prediction).

    A few notes on some of these projects:

    MNIST is really a simple computer-vision dataset whose main purpose is to give you something to practice machine learning on. It may not have a hugely useful application of its own; it is more of a "sparring partner" for learning machine learning, used chiefly to train image-recognition models.

    https://github.com/zalandoresearch/fashion-mnist

    Here is a well-known and fun MNIST-style dataset, Fashion-MNIST, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28×28 grayscale image associated with a label from 10 classes (T-shirt/top, trousers, pullover, dress, coat, sandal, shirt, sneaker, bag, ankle boot). The authors created it as a drop-in benchmark for validating MNIST-style algorithms.
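    As a minimal sketch of what training on an MNIST-style dataset looks like (this uses the Keras API bundled with TensorFlow, not the code from the models repository), the following loads Fashion-MNIST and fits a tiny classifier:

    import tensorflow as tf

    # Load Fashion-MNIST: 60,000 training and 10,000 test 28x28 grayscale images.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

    # A small fully connected network over the 10 clothing classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)
    print(model.evaluate(x_test, y_test))  # [loss, accuracy] on the test set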

     

    wide_deep, as its name says, is a wide & deep model that predicts income from census data; it mostly serves to demonstrate how TensorFlow handles wide & deep models.

    In addition, here are two relatively approachable TensorFlow projects:

    CAPTCHA recognition:

    http://blog.csdn.net/sushiqian/article/details/78305340

    Recognizing CAPTCHAs works on much the same principle as processing the MNIST handwritten-digit dataset, which makes it another good exercise for beginners.

    Gomoku (five in a row):

    https://github.com/junxiaosong/AlphaZero_Gomoku

    This Gomoku project is modeled on AlphaGo/AlphaZero and is great fun; if you are interested in AlphaGo, take a look.


    Today's recommended English article: "Happy 25th birthday Red Hat Linux!" by Steven J. Vaughan-Nichols

    Original link: https://www.zdnet.com/article/happy-25th-birthday-red-hat-linux/

    Why we recommend it: Today Linux is everywhere around us, but 25 years ago that was far from true. Red Hat occupies a very important place in open source history and is, so far, the most commercially successful company in the open source world. Now it has turned 25.

    Happy 25th birthday Red Hat Linux!

    Today, Linux and open-source software rule the tech world. Twenty-five years ago? It was an amateur operating system that only geeks knew about. One of the main reasons Linux got from there to here is Red Hat turned a hobby into an IT force.

    Red Hat co-founder Bob Young — who had run a rental typewriter business — became interested in Linux. In 1993, he founded ACC Corporation, a catalog business that sold Slackware Linux CDs and open-source software.

    Everyone knew, as Young remembered later, “Solaris was much better than Linux, but it was only by using Linux that he could tweak the operating systems to meet their needs.” Young realized that while he couldn’t sell Linux as being better, faster, or having more features than Unix in those days, he could sell one benefit: users could tune it to meet their needs. That would prove to be a key selling point, as it still is today.

    So, he joined forces with Linux developer Marc Ewing, and from Young’s wife’s sewing closet, they launched Red Hat Linux. Like other early Linux businesses, Red Hat started out by selling diskettes, then servers, services, and CDs.

    Today, in an interview, Young said, “What I love about the story is that it took many great contributors from the free software/open-source communities including Stallman to Torvalds. To Marc and I and our team-mates to Matthew Szulik, and now Jim and his vast team. None of us could have fundamentally changed the way software is developed and deployed without all the others.”

    Young continued, “As my internet software developer son-in-law puts it: he and his colleagues couldn’t do what they do without all the free and open software that Red Hat is both a contributor to and a beneficiary from.” He concluded, “And then there is our families. I would not have been able to make my contribution if my wife Nancy had not been willing to bet our kids’ college education on building a software business on a model never done before.”

    First though, Red Hat had to find the magic formula, which would bring it success while so many other of its contemporaries, such as Caldera, TurboLinux, and Mandrake, were left in history’s ashbin.

    Red Hat’s current CEO Jim Whitehurst told me in an interview, “The real contribution we’ve made, besides open-source software, has been the enterprise business model. It’s obvious now, but it wasn’t obvious at the time.”

    I would say so!

    In 2003, Paul Cormier, then Red Hat’s vice president of engineering and now Red Hat’s president of Products and Technologies, led the way to leaving behind its early inexpensive distribution, Red Hat Linux, to move to a full business Linux: Red Hat Enterprise Linux (RHEL).

    Cormier said later that many “engineers at the time didn’t care about a business model. They wanted to work on Red Hat Linux. We had some level of turmoil inside the company with going to this new model. Some engineers left, but more stayed.”

    Many users didn’t like it one darn bit either. They saw Red Hat as abandoning its first customers. Enterprise customers saw it differently.

    Whitehurst, who took Red Hat’s reins in 2008, said, “Once RHEL was in the market we had to support it full stop to make it truly consumable for the enterprise.” They did so and the rest is history.

    Red Hat grew and grew. In its latest quarter, Red Hat realized $772 million of revenue, which was up 23 percent year over year. Not bad for a company built around an operating system that people back in the day thought of as being only for the lunatic fringe.

    Today, Whitehurst remarked, “Linux is the default choice for open-source companies and enterprises. Ten years ago people still had doubts about open source. Now it’s the default choice for clouds, AI, and big data.” Indeed, “Are there even any important big data or AI projects that aren’t built on open source?” he asked.

    The answer, by the by, is no.

    It’s not just Red Hat, it’s all of Linux and open-source. “At a Red Hat development site,” Whitehurst said, “an engineer asked me about Microsoft competing with open source.” Whitehurst replied: “Microsoft is not the issue, Windows is a competitor to Linux and we’d love to kill it, but the largest enterprise software company in the world is pro-open source and that’s good for all of us.”

    While Red Hat makes the bulk of its money from Linux, Red Hat is no longer just a Linux company. Its eyes are now set on the cloud. Red Hat is determined to use OpenStack to gain a place as big in clouds as the role it already has in Linux.

    Red Hat realizes it’s not just the cloud. The company is also heavily invested in containers and container management. Nothing shows that more than its recent acquisition of CoreOS, a leading Kubernetes company.

    Linux brought Red Hat to where it is today. Moving towards tomorrow, it will use open-source software in the cloud, containers, and container orchestration to rise even further in its next 25 years.


    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg

  • April 21, 2018: Open Source Daily Issue 44

    April 21, 2018

    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg


    Today's recommended open source project: "mpvue: building WeChat mini programs with Vue.js"
    Why we recommend it: mpvue is a front-end framework for developing WeChat mini programs with Vue.js. It is built on the Vue.js core; mpvue modifies the Vue.js runtime and compiler so that they can run inside the mini program environment, bringing the full Vue.js development experience to mini program development.

    The framework has the following advantages:

    1. Thoroughly component-based development, which improves code reuse
    2. The complete Vue.js development experience
    3. Convenient state management with Vuex, making it easier to build complex applications
    4. A fast webpack build pipeline, with custom build strategies and hot reload during development
    5. Support for external npm dependencies
    6. Quick project scaffolding with vue-cli, the official Vue.js command-line tool
    7. The ability to compile H5 code into mini program target code

    Installation

    First, you need to have WeChat DevTools installed already; download link:

    https://mp.weixin.qq.com/debug/wxadoc/dev/devtools/download.html

    Next, you need to install Node.js:

    https://nodejs.org/

    After that you should be ready to start playing.

    1. Switch the npm registry

    npm set registry https://registry.npm.taobao.org/
    

    2. Install vue-cli globally (Vue's official command-line tool)

    npm install --global vue-cli
    

    3. Create a new project

    vue init mpvue/mpvue-quickstart test
    

    4. Enter the project folder and install the dependencies

    cd test
    npm install
    npm run dev

    If everything succeeds, you should see a dist folder inside the test directory.

    Of course, the steps above are the quick-start route; if you would rather configure everything yourself, see:

    http://mpvue.com/build/#_2

    Usage

    Open WeChat DevTools and create a new project.

    For the project directory, choose the dist folder mentioned above.

    Note that the AppID determines whether you can debug on a real device.

    After confirming, the project opens in the IDE (screenshots omitted here).

    Then simply open the code under src in whatever editor you like, make your changes, and you are good to go.

    mpvue is a nice framework; whether you just want to play around (as I did) or do serious development, it is a good choice. Vue itself is not hard to learn, and I predict this project will be on the weekly trending list again next week. :)

    Tools you may find useful:

    mpvue-loader, the webpack loader for mpvue: http://mpvue.com/build/mpvue-loader
    mpvue-webpack-target, a webpack build target: http://mpvue.com/build/mpvue-webpack-target
    postcss-mpvue-wxss, a style-code conversion and preprocessing tool: http://mpvue.com/build/postcss-mpvue-wxss
    px2rpx-loader, a style conversion plugin: http://mpvue.com/build/px2rpx-loader
    mpvue-quickstart, the mpvue quick-start template: http://mpvue.com/mpvue/quickstart
    mpvue-simple, a helper for quickly building Page/Component-level mini program pages with mpvue: http://mpvue.com/mpvue/simple

    About webpack

    The project also thoughtfully includes a diagram of the build setup (see its docs), along with a link to the webpack documentation: https://doc.webpack-china.org/

    Since I am not that familiar with webpack myself, this is as far as I can take it. :)

    About the authors

    mpvue comes from Meituan-Dianping, the large (top-tier in China) company behind the Meituan Waimai food-delivery service.


    Today's recommended English article: "3 tips for organizing your open source project's workflow on GitHub" by Justin W. Flory

    Original link: https://opensource.com/article/18/4/keep-your-project-organized-git-repo

    Why we recommend it: How can you organize your open source project's workflow on GitHub more effectively? Here are three tips.

    3 tips for organizing your open source project’s workflow on GitHub


    Managing an open source project is challenging work, and the challenges grow as a project grows. Eventually, a project may need to meet different requirements and span multiple repositories. These problems aren’t technical, but they are important to solve to scale a technical project. Business process management methodologies such as agile and kanban bring a method to the madness. Developers and managers can make realistic decisions for estimating deadlines and team bandwidth with an organized development focus.

    At the UNICEF Office of Innovation, we use GitHub project boards to organize development on the MagicBox project. MagicBox is a full-stack application and open source platform to serve and visualize data for decision-making in humanitarian crises and emergencies. The project spans multiple GitHub repositories and works with multiple developers. With GitHub project boards, we organize our work across multiple repositories to better understand development focus and team bandwidth.

    Here are three tips from the UNICEF Office of Innovation on how to organize your open source projects with the built-in project boards on GitHub.

    1. Bring development discussion to issues and pull requests

    Transparency is a critical part of an open source community. When mapping out new features or milestones for a project, the community needs to see and understand a decision or why a specific direction was chosen. Filing new GitHub issues for features and milestones is an easy way for someone to follow the project direction. GitHub issues and pull requests are the cards (or building blocks) of project boards. To be successful with GitHub project boards, you need to use issues and pull requests.

    GitHub issues for magicbox-maps, MagicBox’s front-end application.

    The UNICEF MagicBox team uses GitHub issues to track ongoing development milestones and other tasks to revisit. The team files new GitHub issues for development goals, feature requests, or bugs. These goals or features may come from external stakeholders or the community. We also use the issues as a place for discussion on those tasks. This makes it easy to cross-reference in the future and visualize upcoming work on one of our projects.

    Once you begin using GitHub issues and pull requests as a way of discussing and using your project, organizing with project boards becomes easier.

    2. Set up kanban-style project boards

    GitHub issues and pull requests are the first step. After you begin using them, it may become harder to visualize what work is in progress and what work is yet to begin. GitHub’s project boards give you a platform to visualize and organize cards into different columns.

    There are two types of project boards available:

    • Repository: Boards for use in a single repository
    • Organization: Boards for use in a GitHub organization across multiple repositories (but private to organization members)

    The choice you make depends on the structure and size of your projects. The UNICEF MagicBox team uses boards for development and documentation at the organization level, and then repository-specific boards for focused work (like our community management board).

    Creating your first board

    Project boards are found on your GitHub organization page or on a specific repository. You will see the Projects tab in the same row as Issues and Pull requests. From the page, you’ll see a green button to create a new project.

    There, you can set a name and description for the project. You can also choose templates to set up basic columns and sorting for your board. Currently, the only options are for kanban-style boards.

    Creating a new GitHub project board.

    After creating the project board, you can make adjustments to it as needed. You can create new columns, set up automation, and add pre-existing GitHub issues and pull requests to the project board.

    You may notice new options for the metadata in each GitHub issue and pull request. Inside of an issue or pull request, you can add it to a project board. If you use automation, it will automatically enter a column you configured.

    3. Build project boards into your workflow

    After you set up a project board and populate it with issues and pull requests, you need to integrate it into your workflow. Project boards are effective only when actively used. The UNICEF MagicBox team uses the project boards as a way to track our progress as a team, update external stakeholders on development, and estimate team bandwidth for reaching our milestones.

    Tracking progress with GitHub project boards.

    If you are an open source project and community, consider using the project boards for development-focused meetings. It also helps remind you and other core contributors to spend five minutes each day updating progress as needed. If you’re at a company using GitHub to do open source work, consider using project boards to update other team members and encourage participation inside of GitHub issues and pull requests.

    Once you begin using the project board, yours may look like this:

    Development progress board for all UNICEF MagicBox repositories in organization-wide GitHub project boards.

    Open alternatives

    GitHub project boards require your project to be on GitHub to take advantage of this functionality. While GitHub is a popular repository for open source projects, it’s not an open source platform itself. Fortunately, there are open source alternatives to GitHub with tools to replicate the workflow explained above. GitLab Issue Boards and Taiga are good alternatives that offer similar functionality.

    Go forth and organize!

    With these tools, you can bring a method to the madness of organizing your open source project. These three tips for using GitHub project boards encourage transparency in your open source project and make it easier to track progress and milestones in the open.

    Do you use GitHub project boards for your open source project? Have any tips for success that aren’t mentioned in the article? Leave a comment below to share how you make sense of your open source projects.


    Every day we recommend one high-quality GitHub open source project and one hand-picked English article on technology or programming. You are welcome to follow Open Source Daily (开源日报). QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg
