
Open Source Daily

  • April 8, 2018: Open Source Daily Issue 31

    Every day we recommend one quality GitHub open source project and one hand-picked English article on technology or programming; follow Open Source Daily to keep up. QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg


    Today's recommended open source project: Wifitransfer-master, an APK transfer assistant

    Why we recommend it: this open source project by a Chinese developer makes it easy to transfer Android APK installation packages over WiFi.

    What Wifitransfer-master is:

    A web page submits an APK to your phone over WiFi; the phone side then handles installing and uninstalling the APK.

    Why use Wifitransfer-master:

    Moving an APK between phone and computer is a pain when you lack a data cable (it broke, it got lost, the borrowed one has the wrong connector, and so on). Transferring the APK over WiFi instead is far more convenient.

     

    How it came about (the story of Wifitransfer-master and WiFi book transfer):

    After the author had the idea, he found it nearly identical to a feature another developer had implemented and blogged about (WiFi book transfer); in particular, the server module and the web front end were already finished there. The author modified a clone of that work (about 80% of it comes from WiFi book transfer), restricted uploads from document formats to APK files only, added reporting of the uploaded file's size, and restyled the phone-side display.

     

    About WiFi book transfer:

    It follows the WiFi book-transfer feature of the Duokan reader.

    • The HTTP server on the phone is implemented with the open source project AndroidAsync.
    • The web front end is built with jQuery; file upload uses Upload5 on HTML5 browsers and Ajaxupload.js on non-HTML5 browsers such as IE7/IE8/IE9.

     

    How to use it:

    • Open the app and tap the WIFI icon in the bottom-right corner to start the WLAN service; the app shows its IP address on the current network and a designated port.
    • On a computer on the same network, enter the given address in a browser to reach the upload page, then pick the file to upload.
    • Once the upload completes (that is, the APK has landed in a designated directory on the phone), the phone lists the freshly uploaded APK and analyzes it (name, size, and other details).
    • The app checks whether an app with the same package name is already installed; if so, it shows an uninstall button in addition to the install button.
    • Tap install to install the APK (note the Android O adaptation), and tap uninstall to remove it.
    • The web page can likewise uninstall and download APKs.

     


     

    About jQuery

    An easy-to-use JavaScript library that simplifies many tasks, from traversing HTML documents to animation, with very good compatibility and extensibility. If you program in JavaScript, it can noticeably boost your productivity.

    Official site: https://jquery.com/

    GitHub: https://github.com/jquery/jquery

     

    About AndroidAsync

    A low-level network protocol library that wraps common asynchronous requests, such as fetching strings, JSON, or files. It supports caching and can also create WebSockets; it is powerful and easy to use.

    Link: https://github.com/koush/AndroidAsync

     

    About upload5

    A flexible HTML5/JavaScript library that lets you upload multiple files at once.

    Link: https://github.com/looptribe/upload5

     

    About Ajaxupload.js

    A jQuery plugin for uploading files. It only covers simple scenarios, and in this project it serves as the fallback when the browser does not support HTML5.

    Link: https://gist.github.com/harpreetsi/3369391

     

    About the author

    MZCretin (穆仙念)

    Focused mainly on Android development.

    Blog: http://blog.csdn.net/u010998327

    GitHub profile: https://github.com/MZCretin


    Today's recommended English article: "12 Git tips for Git's 12th birthday" by John SJ Anderson

    Original link: https://opensource.com/article/18/4/12-git-tips-gits-12th-birthday

    Why we recommend it: did you know Git is 12 years old, and that today is its 12th birthday? For software programmers deal with every day, this article hand-picks 12 tips worth a reminder; come learn, or review, them with us.

    12 Git tips for Git’s 12th birthday

    Image by: opensource.com

    Git, the distributed revision-control system that’s become the default tool for source code control in the open source world, turns 12 on April 7. One of the more frustrating things about using Git is how much you need to know to use it effectively. This can also be one of the more awesome things about using Git, because there’s nothing quite like discovering a new tip or trick that can streamline or improve your workflow.

    In honor of Git’s 12th birthday, here are 12 tips and tricks to make your Git experience more useful and powerful, starting with some basics you might have overlooked and scaling up to some real power-user tricks!

    1. Your ~/.gitconfig file

    The first time you tried to use the git command to commit a change to a repository, you might have been greeted with something like this:

    *** Please tell me who you are.

    Run

      git config --global user.email "you@example.com"
      git config --global user.name "Your Name"

    to set your account's default identity.

    What you might not have realized is that those commands are modifying the contents of ~/.gitconfig, which is where Git stores global configuration options. There is a vast array of things you can do via your ~/.gitconfig file, including defining aliases, turning particular command options on (or off!) on a permanent basis, and modifying aspects of how Git works (e.g., which diff algorithm git diff uses or what type of merge strategy is used by default). You can even conditionally include other config files based on the path to a repository! See man git-config for all the details.
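    As a minimal sketch of what those commands do on disk (HOME is redirected to a throwaway directory so your real ~/.gitconfig is untouched; the name and email are placeholders):

```shell
# Redirect HOME to a temp dir so the real ~/.gitconfig is not modified.
export HOME="$(mktemp -d)"

git config --global user.email "you@example.com"
git config --global user.name "Your Name"

# The commands above simply wrote an INI-style file:
cat "$HOME/.gitconfig"
```

    The printed file has a `[user]` section with the two keys, which is exactly the format described in the aliases tip below.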

    2. Your repo’s .gitconfig file

    In the previous tip, you may have wondered what that --global flag on the git config command was doing. It tells Git to update the “global” configuration, the one found in ~/.gitconfig. Of course, having a global config also implies a local configuration, and sure enough, if you omit the --global flag, git config will instead update the repository-specific configuration, which is stored in .git/config.

    Options that are set in the .git/config file will override any setting in the ~/.gitconfig file. So, for example, if you need to use a different email address for a particular repository, you can run git config user.email "personal@example.com". Then, any commits in that repository will use your other email address. This can be super useful if you work on open source projects from a work laptop and want them to show up with a personal email address while still using your work email for your main Git configuration.

    Pretty much anything you can set in ~/.gitconfig, you can also set in .git/config to make it specific to the given repository. In any of the following tips, when I mention adding something to your ~/.gitconfig, just remember you could also set that option for just one repository by adding it to .git/config instead.
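    A quick sketch of that precedence (HOME is again redirected to a throwaway directory, and both email addresses are placeholders):

```shell
# Throwaway HOME so the real ~/.gitconfig is not modified.
export HOME="$(mktemp -d)"
cd "$HOME"
git init -q demo && cd demo

git config --global user.email "work@example.com"   # written to ~/.gitconfig
git config user.email "personal@example.com"        # written to .git/config

# The repository-local value wins:
git config --get user.email
```

    Commits made inside `demo` would now be attributed to the personal address, while every other repository still falls back to the global one.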

    3. Aliases

    Aliases are another thing you can put in your ~/.gitconfig. These work just like aliases in the command shell—they set up a new command name that can invoke one or more other commands, often with a particular set of options or flags. They’re super useful for longer, more complicated commands you use frequently.

    You can define aliases using the git config command—for example, running git config --global --add alias.st status will make running git st do the same thing as running git status—but I find when defining aliases, it’s frequently easier to just edit the ~/.gitconfig file directly.

    If you choose to go this route, you’ll find that the ~/.gitconfig file is an INI file. INI is basically a key-value file format with particular sections. When adding an alias, you’ll be changing the [alias] section. For example, to define the same git st alias as above, add this to the file:

    [alias]
    st = status

    (If there’s already an [alias] section, just add the second line to that existing section.)

    4. Aliases to shell commands

    Aliases aren’t limited to just running other Git subcommands—you can also define aliases that run other shell commands. This is a fantastic way to deal with a recurring, infrequent, and complicated task: Once you’ve figured out how to do it once, preserve the command under an alias. For example, I have a few repositories where I’ve forked an open source project and made some local modifications that don’t need to be contributed back to the project. I want to keep up-to-date with ongoing development work in the project but also maintain my local changes. To accomplish this, I need to periodically merge the changes from the upstream repo into my fork—which I do by using an alias I call upstream-merge. It’s defined like this:

    upstream-merge = !"git fetch origin -v && git fetch upstream -v && git merge upstream/master && git push"

    The ! at the beginning of the alias definition tells Git to run the command via the shell. This example involves running a number of git commands, but aliases defined in this way can run any shell command.

    (Note that if you want to copy my upstream-merge alias, you’ll need to make sure you have a Git remote named upstream pointed at the upstream repository you’ve forked from. You can add this by running git remote add upstream <URL to repo>.)

    5. Visualizing the commit graph

    If you work on a project with a lot of branching activity, sometimes it can be difficult to get a handle on all the work that’s happening and how it’s all related. Various GUI tools allow you to get a picture of different branches and commits in what’s called the “commit graph.” For example, here’s a section of one of my repositories visualized with the GitLab commit graph viewer:

    [Figure: GitLab commit graph viewer. John Anderson, CC BY]

    If you’re a dedicated command-line user or somebody who finds switching tools to be distracting, it’s nice to get a similar view of the commit graph from the command line. That’s where the --graph argument to the git log command comes in:

    [Figure: repository visualized with the --graph command. John Anderson, CC BY]

    This is the same section of the same repo visualized with the following command:

    git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative

    The --graph option adds the graph to the left side of the log, --abbrev-commit shortens the commit SHAs, --date=relative expresses the dates in relative terms, and the --pretty bit handles all the other custom formatting. I have this aliased to git lg, and it is one of my top 10 most frequently run commands.
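    Since the author keeps this behind a git lg alias, here is a sketch of registering it with one git config call (throwaway HOME; the alias name lg comes from the text above):

```shell
export HOME="$(mktemp -d)"   # throwaway HOME; the real ~/.gitconfig is untouched

# One long line: the alias value is the whole log invocation minus the leading "git".
git config --global alias.lg "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative"

# Verify what was stored; "git lg" now expands to this.
git config --global --get alias.lg
```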

    6. A nicer force-push

    Sometimes, as hard as you try to avoid it, you’ll find that you need to run git push --force to overwrite the history on a remote copy of your repository. You may have gotten some feedback that caused you to do an interactive rebase, or you may simply have messed up and want to hide the evidence.

    One of the hazards with force pushes happens when somebody else has made changes on top of the same branch in the remote copy of the repository. When you force-push your rewritten history, those commits will be lost. This is where git push --force-with-lease comes in—it will not allow you to force-push if the remote branch has been updated, which will ensure you don’t throw away someone else’s work.
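    The failure mode can be sketched end to end with two throwaway clones (all names, emails, and paths here are placeholders; HOME is redirected so no real configuration is touched):

```shell
export HOME="$(mktemp -d)"; cd "$HOME"
git config --global user.email "you@example.com"
git config --global user.name  "You"
git config --global init.defaultBranch main

git init -q --bare remote.git                 # stand-in for the shared repository

git clone -q remote.git alice && cd alice
echo one > f && git add f && git commit -qm "first"
git push -q -u origin HEAD
cd ..

git clone -q remote.git bob && cd bob         # a collaborator appears...
echo two >> f && git commit -qam "bob's work" && git push -q
cd ..

cd alice                                      # ...while alice rewrites history
git commit -q --amend -m "rewritten first"

# A plain --force would silently discard bob's commit; --force-with-lease
# sees that alice's remote-tracking ref is stale and refuses:
git push --force-with-lease origin HEAD || echo "push rejected"
```

    After a `git fetch`, alice can review bob's commit and decide whether the forced push is still what she wants; only then will --force-with-lease let it through.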

    7. git add -N

    Have you ever used git commit -a to stage and commit all your outstanding changes in a single move, only to discover after you've pushed your commit that git commit -a ignores newly added files? You can work around this by using git add -N (think "notify") to tell Git about newly added files you'd like included in commits before you actually commit them for the first time.
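    A small reproducible sketch (throwaway HOME and placeholder file names):

```shell
export HOME="$(mktemp -d)"; cd "$HOME"
git config --global user.email "you@example.com"
git config --global user.name  "You"
git init -q demo && cd demo

echo hello > tracked.txt
git add tracked.txt && git commit -qm "initial"

echo new > brand-new.txt          # a file that git commit -a would ignore...
git add -N brand-new.txt          # ...unless we record the intent to add it

git commit -qam "commit everything"  # -a now picks up brand-new.txt too
git ls-tree -r --name-only HEAD
```

    Without the git add -N line, the final commit would contain only tracked.txt.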

    8. git add -p

    A best practice when using Git is to make sure each commit consists of only a single logical change—whether that’s a fix for a bug or a new feature. Sometimes when you’re working, however, you’ll end up with more than one commit’s worth of change in your repository. How can you manage to divide things up so that each commit contains only the appropriate changes? git add --patch to the rescue!

    This flag will cause the git add command to look at all the changes in your working copy and, for each one, ask if you’d like to stage it to be committed, skip over it, or defer the decision (as well as a few other more powerful options you can see by selecting ? after running the command). git add -p is a fantastic tool for producing well-structured commits.

    9. git checkout -p

    Similar to git add -p, the git checkout command will take a --patch or -p option, which will cause it to present each “hunk” of change in your local working copy and allow you to discard it—basically reverting your local working copy to what was there before your change.

    This is fantastic when, for example, you've introduced a bunch of debug logging statements while chasing down a bug. After the bug is fixed, you can first use git checkout -p to remove all the new debug logging, then use git add -p to add the bug fix. Nothing is more satisfying than putting together a beautiful, well-structured commit!

    10. Rebase with command execution

    Some projects have a rule that each commit in the repository must be in a working state—that is, at each commit, it should be possible to compile the code or the test suite should run without failure. This is not too difficult when you’re working on a branch over time, but if you end up needing to rebase for whatever reason, it can be a little tedious to step through each rebased commit to make sure you haven’t accidentally introduced a break.

    Fortunately, git rebase has you covered with the -x or --exec option. git rebase -x <cmd> will run that command after each commit is applied in the rebase. So, for example, if npm run tests runs your project's test suite, git rebase -x "npm run tests" (quoted, so the whole string is passed as one command) would run the test suite after each commit is applied during the rebase, letting you confirm that the suite still passes at every rebased commit.
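    A self-contained sketch, substituting a trivial echo for the real test suite (throwaway HOME; file names are placeholders):

```shell
export HOME="$(mktemp -d)"; cd "$HOME"
git config --global user.email "you@example.com"
git config --global user.name  "You"
git init -q demo && cd demo

for n in 1 2 3; do
  echo "$n" > "file$n" && git add "file$n" && git commit -qm "commit $n"
done

# Replay the last two commits, running a stand-in "test suite" after each one.
# In a real project this would be e.g.: git rebase -x "npm run tests" main
git rebase -x "echo suite passed" HEAD~2
```

    If the exec command exits non-zero at any step, the rebase stops there so you can fix the offending commit before continuing.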

    11. Time-based revision references

    Many Git subcommands take a revision argument to specify what part of the repository to work on. This can be the SHA1 of a particular commit, a branch name, or even a symbolic name like HEAD (which refers to the most recent commit on the currently checked out branch). In addition to these simple forms, you can also append a specific date or time to mean “this reference, at this time.”

    This becomes very useful when you're dealing with a newly introduced bug and find yourself saying, "I know this worked yesterday! What changed?" Instead of staring at the output of git log trying to figure out what changed when, you can simply run git diff HEAD@{yesterday} and see all the changes that have happened since then. This also works with longer time periods (e.g., git diff HEAD@{'2 months ago'}) as well as exact dates (e.g., git diff HEAD@{'2010-01-01 12:00:00'}).

    You can also use these date-based revision arguments with any Git subcommand that takes a revision argument. Find full details about which format to use in the man page for gitrevisions.
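    The syntax can be sketched in a fresh repository by using @{now}; a real repository would use @{yesterday} or an exact date, but those need reflog entries older than the requested time, which a just-created repo does not have (throwaway HOME; file names are placeholders):

```shell
export HOME="$(mktemp -d)"; cd "$HOME"
git config --global user.email "you@example.com"
git config --global user.name  "You"
git init -q demo && cd demo

echo one > f && git add f && git commit -qm "first"

echo two >> f                  # an uncommitted change in the working copy

# "HEAD as of right now" is resolved through the reflog, exactly like
# HEAD@{yesterday} would be; the diff shows what changed since then.
git diff 'HEAD@{now}' -- f
```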

    12. The all-seeing reflog

    Have you ever rebased away a commit, then discovered there was something in that commit you wanted to keep? You may have thought that information was lost forever and would need to be recreated. But if you committed it in your local working copy, it was added to the reference log (reflog), and you should still be able to access it.

    Running git reflog will show you a list of all the activity for the current branch in your local working copy and also give you the SHA1 of each commit. Once you've found the commit you rebased away, you can run git checkout <SHA1> to check out that commit (leaving you on a detached HEAD), copy any information you need, and then run git checkout - to return to the branch you were on.
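    The rescue can be sketched like this, using a hard reset to stand in for the rebase that "lost" a commit (throwaway HOME; commit messages and file names are placeholders):

```shell
export HOME="$(mktemp -d)"; cd "$HOME"
git config --global user.email "you@example.com"
git config --global user.name  "You"
git init -q demo && cd demo

echo one > f && git add f && git commit -qm "keep me"
echo two > f && git commit -qam "oops, about to be lost"

git reset -q --hard HEAD~1          # simulate losing the second commit

git reflog                          # the "lost" commit's SHA1 is still listed
lost=$(git rev-parse 'HEAD@{1}')    # grab it (here via its reflog index)

git checkout -q "$lost"             # detached HEAD at the lost commit
cat f                               # the lost content is back: prints "two"
git checkout -q -                   # return to the branch we came from
```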

    That’s all folks!

    Hopefully at least one of these tips has taught you something new about Git, a 12-year-old project that’s continuing to innovate and add new features. What’s your favorite Git trick?



  • April 7, 2018: Open Source Daily Issue 30


    Today's recommended open source project: tui.editor, a powerful Markdown editor

    Why we recommend it: Markdown is a markup language that can be written in an ordinary text editor; with simple markup syntax it gives plain text a degree of formatting. Its syntax is concise and easy to learn, yet more capable than plain text, which is why it has spread so widely. And tui.editor is a superbly usable Markdown editor.

    As a Markdown editor, tui.editor has the following features:

    1. Supports both the CommonMark and GFM (GitHub Flavored Markdown) specifications
    2. Supports rich extension plugins, such as a color picker, charts, UML, and table merging
    3. Provides both a WYSIWYG mode and a Markdown mode, switchable at any time while editing. In WYSIWYG mode you can paste text directly from browsers, Excel, PowerPoint, and more, with the original formatting preserved

    Demo video: https://www.youtube.com/watch?v=45a2xSNyHUA&feature=youtu.be

    Handy tips

    tui.editor uses some novel techniques to make editing more efficient; here is how these conveniences are implemented.

    Copy and paste

    Any discussion of what makes tui.editor special has to mention its ability to paste directly from browsers and other applications, and to understand that feature it helps to start with how copy and paste work.

    Operating systems generally reserve an area called the clipboard that handles copy and paste. Copying clones text into the clipboard, and pasting clones the clipboard's text back out. In practice, however, copying text into, say, a Word document is not a plain text copy: styling such as font and size comes along with it.

    That styled payload is essentially a markup language. If the text is instead pasted into an application that does not support fonts and sizes, such as a plain-text notepad, the application cannot process those styles, and the pasted text falls back to its default font and size.

    Back to the point: tui.editor can handle text copied from Excel sheets, browsers, and elsewhere precisely because it detects the different styles carried by the copied text and processes each kind accordingly, which makes building on existing material much easier.

    Synchronized scrolling

    Another convenience of tui.editor is that in Markdown mode the left and right panes scroll in sync, which can be implemented in JavaScript:

    1. Identify the two scroll-container elements
    2. Watch for the mouse entering either container; while handling that container's scroll events, assign its scrollTop to the other container (if each element's own scroll event directly wrote its scrollTop to the other, the two would trigger each other in a loop and scrolling would slow down)

    To handle more complex situations, such as the two panes having different content heights, work out the ratio between the two elements' scrollTop values and adjust the method above; the same basic approach still yields synchronized scrolling.

    Links

    To learn more about the editor, visit its GitHub page:

    https://github.com/nhnent/tui.editor

    Or try it directly: https://nhnent.github.io/tui.editor/api/latest/tutorial-example00-demo.html


    Today's recommended English article: "Submitting my first patch to the Linux kernel" by Dileep Sankhla

    Original link: https://opensource.com/article/18/4/submitting-my-first-patch-linux-kernel

    Why we recommend it: honestly, for those who love Linux, landing code in the Linux kernel is about the highest honor there is. This article tells the story of how the author did exactly that; read on to see how.

    Submitting my first patch to the Linux kernel

    Image by: opensource.com

    I started using Linux three years ago while attending university, and I was fascinated to discover a different desktop environment. My professor introduced me to the Ubuntu operating system, and I decided to dual-boot it along with Windows on my laptop the same day.

    Within three months, I had completely abandoned Windows and shifted to Fedora after hearing about the RPM Package Manager. I also tried running Debian for stability, but in early 2017 I realized Arch Linux suits all my needs for cutting-edge packages. Now I use it along with the KDE desktop and can customize it according to my needs.

    I have always been intrigued by Linux hacks and the terminal instead of using ready-made unified installers. This led me to explore the Linux kernel tree and find a way to contribute to the community.

    Submitting my first patch to the Linux kernel was a breathtaking experience. I started the second week of February 2018; it took me about an hour to clone the latest Linux kernel source tree on an 8-Mbps internet connection, and nearly all night to compile and build the 4.15 Arch Linux kernel. I followed the KernelNewbies guide and read the first three chapters of Linux Device Drivers, Third Edition. This book introduced me to device drivers and their specific types, and described how to insert and remove them as modules at runtime. The sample code in the book helped me create a hello world driver and experiment with the insmod and rmmod commands (the code samples in subsequent chapters are a bit outdated).

    Many people advised me to read books on operating systems and Linux kernel development before contributing; others suggested following the KernelNewbies’ guide and using the bug-finding tools to fix errors. I followed the latter advice because I found exploring and experimenting with the code around errors is the best way to learn and understand the kernel code.

    My first cleanup was removing the "out of memory" warning by running the checkpatch.pl script on the vt6656 driver directory. After adding the changelog and updating the patch, I submitted it on February 10. On February 12, I received an email from Greg Kroah-Hartman stating that my patch had been added to the staging-next branch and would be ready to merge in the next major kernel release.

    I recommend keeping your first patch simple; one or two lines will inspire you to contribute more. Keep in mind that quality, not quantity, is what matters. Before contributing to the TODO list of the drivers, you should acquire extensive knowledge of device drivers and the operating system. The thrill of contributing will keep you going.



  • April 6, 2018: Open Source Daily Issue 29


    Today's recommended open source project: Luna, a visual object-oriented language

    Why we recommend it: Luna is a still-maturing object-oriented language whose biggest distinction is its attempt to represent a program's flow visually. In the most common case, each node corresponds to one line of code.

    For example, the figure below:

    [Figure: a Luna node graph corresponding to the four lines of code that follow]

    Its corresponding Luna code is

    a = 1
    b = 2
    c = a + b
    c.succ

     

    From this we can see that a node effectively splits a line of code in two at the equals sign: the left side is the variable being defined, and the right side is the corresponding expression.

    The two leftmost nodes correspond to the lines a = 1 and b = 2. The variable names a and b become the names of their nodes, and the numbers 1 and 2 their definitions. They have no input ports and one output port each.

    The middle node corresponds to the line c = a + b. It has a and b connected to its input ports, so it is easy to see where the input data comes from.

    The rightmost node corresponds to the line c.succ. That node has no name, because its line does not define any variable. It has c connected to its self port, the port that represents the target of a method call.

    The color of the lines linking the nodes encodes the data type: yellow is integer, orange is floating point (called Real in Luna), purple is string, blue is a list (array), and green is a list holding several data types. Each node's left edge carries its inputs and its right edge its outputs.

    Data types:

    Judging from Luna's current data types and how they are handled, it still needs work before it can call itself a well-rounded visual object-oriented language.

    Luna currently supports three basic data types: integers, reals (floating point), and strings. Conversion between them, however, is not yet implemented, so an expression like 1 + 1.5 cannot be evaluated, which makes data processing somewhat awkward.

    On custom data types:

    As in Python, Luna creates a new object from a class; that is how custom data types are used.

    Objects in Luna are immutable. In other languages you might mutate an object with counter.count += 1; in Luna, if you write foo = Circle 15.0, then however you use it, foo will always be Circle 15.0 unless you redefine foo.

    Defining classes and methods:

    Here is the textual form of a class and its methods:

    class Shape:
        Circle:
            radius :: Real
        Rectangle:
            width  :: Real
            height :: Real

        def perimeter: case self of
            Circle r: 2.0 * pi * r
            Rectangle w h: 2.0 * w + 2.0 * h

        def area: case self of
            Circle r: pi * r * r
            Rectangle w h: w * h

     

    Summary:

    Main strengths:

    As a new kind of visual programming language, Luna focuses on data processing, so anyone who needs computer-assisted data work can quickly start programming with it.

    Main weaknesses:

    Luna is still lacking where classes and objects are concerned: there is currently no way to define classes and methods from the visual editor.

    Personally, I feel Luna has made big compromises in code flexibility for the sake of visualizing program flow; even a chained assignment like cos = sin = 5 cannot be written on one line, which is unfriendly to syntactic-sugar lovers like this author. Then again, Luna targets people who need data processing but care little for programming or syntax, so interested readers may still find it worth studying on GitHub.

    In addition, Luna's basic syntax and operations are explained in detail in the official documentation:

    https://luna-lang.gitbooks.io/docs/content/visual_representation.html


    Today's recommended English article: "Who really owns an open project?" by Heidi Hess von Ludewig

    Original link: https://opensource.com/open-organization/18/4/rethinking-ownership-across-organization

    Why we recommend it: many people do not quite understand how open source communities operate. For instance, who really owns an open source project? A typical example: Linus wrote the Linux kernel and then open-sourced it; more people joined to contribute code, along with volunteers doing non-code work, plus donors. So who can be said to "own" the project? Who does it really belong to?

    Honestly, it is a complicated question. Before tackling it we first have to define what "owning a project" means: holding the intellectual property? Deciding its direction? Having created it? Or being able to kill it?

    Who really owns an open project?

    Image by: Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0

    Differences in organizational design don't necessarily make some organizations better than others—just better suited to different purposes. Any style of organization must account for its models of ownership (the way tasks get delegated, assumed, executed) and responsibility (the way accountability for those tasks gets distributed and enforced). Conventional organizations and open organizations treat these issues differently, however, and those differences can be jarring for anyone transitioning from one organizational model to another. But transitions are ripe for stumbling over—oops, I mean, learning from.

    Let’s do that.

    Ownership explained

    In most organizations (and according to typical project management standards), work on projects proceeds in five phases:

    • Initiation: Assess project feasibility, identify deliverables and stakeholders, assess benefits
    • Planning (Design): Craft project requirements, scope, and schedule; develop communication and quality plans
    • Executing: Manage task execution, implement plans, maintain stakeholder relationships
    • Monitoring/Controlling: Manage project performance, risk, and quality of deliverables
    • Closing: Sign-off on completion requirements, release resources

    The list above is not exhaustive, but I’d like to add one phase that is often overlooked: the “Adoption” phase, frequently needed for strategic projects where a change to the culture or organization is required for “closing” or completion.

    • Adoption: Socializing the work of the project; providing communication, training, or integration into processes and standard workflows.

    Examining project phases is one way to contrast the expression of ownership and responsibility in organizations.

    Two models, contrasted

    In my experience, “ownership” in a traditional software organization works like this.

    A manager or senior technical associate initiates a project with senior stakeholders and, with the authority to champion and guide the project, they bestow the project on an associate at some point during the planning and execution stages. Frequently, but not always, the groundwork or fundamental design of the work has already been defined and approved—sometimes even partially solved. Employees are expected to see the project through execution and monitoring to completion.

    Employees cut their teeth on a “starter project,” where they prove their abilities to a management chain (for example, I recall several such starter projects that were already defined by a manager and architect, and I was assigned to help implement them). Employees doing a good job on a project for which they’re responsible get rewarded with additional opportunities, like a coveted assignment, a new project, or increased responsibility.

    An associate acting as “owner” of work is responsible and accountable for that work (if someone, somewhere, doesn’t do their job, then the responsible employee either does the necessary work herself or alerts a manager to the problem.) A sense of ownership begins to feel stable over time: Employees generally work on the same projects, and in the same areas for an extended period. For some employees, it means the development of deep expertise. That’s because the social network has tighter integration between people and the work they do, so moving around and changing roles and projects is rather difficult.

    This process works differently in an open organization.

    Associates continually define the parameters of responsibility and ownership in an open organization—typically in light of their interests and passions. Associates have more agency to perform all the stages of the project themselves, rather than have pre-defined projects assigned to them. This places additional emphasis on leadership skills in an open organization, because the process is less about one group of people making decisions for others, and more about how an associate manages responsibilities and ownership (whether or not they roughly follow the project phases while being inclusive, adaptable, and community-focused, for example).

    Being responsible for all project phases can make ownership feel more risky for associates in an open organization. Proposing a new project, designing it, and leading its implementation takes initiative and courage—especially when none of this is pre-defined by leadership. It’s important to get continuous buy-in, which comes with questions, criticisms, and resistance not only from leaders but also from peers. By default, in open organizations this makes associates leaders; they do much the same work that higher-level leaders do in conventional organizations. And incidentally, this is why Jim Whitehurst, in The Open Organization, cautions us about the full power of “transparency” and the trickiness of getting people’s real opinions and thoughts whether we like them or not. The risk is not as high in a traditional organization, because in those organizations leaders manage some of it by shielding associates from heady discussions that arise.

    The reward in an Open Organization is more opportunity—offers of new roles, promotions, raises, etc., much like in a conventional organization. Yet in the case of open organizations, associates have developed reputations of excellence based on their own initiatives, rather than on pre-sanctioned opportunities from leadership.

    Thinking about adoption

    Any discussion of ownership and responsibility involves addressing the issue of buy-in, because owning a project means we are accountable to our sponsors and users—our stakeholders. We need our stakeholders to buy into our idea and direction, or we need users to adopt an innovation we've created with our stakeholders. Achieving buy-in for ideas and work is important in each type of organization, and it's difficult in both traditional and open systems—but for different reasons.


    Penetrating a traditional organization’s closely knit social ties can be difficult, and it takes time. In such “command-and-control” environments, one would think that employees are simply “forced” to do whatever leaders want them to do. In some cases that’s true (e.g., a travel reimbursement system). However, with more innovative programs, this may not be the case; the adoption of a program, tool, or process can be difficult to achieve by fiat, just like in an open organization. And yet these organizations tend to reduce redundancies of work and effort, because “ownership” here involves leaders exerting responsibility over clearly defined “domains” (and because those domains don’t change frequently, knowing “who’s who”—who’s in charge, who to contact with a request or inquiry or idea—can be easier).

    Open organizations better allow highly motivated associates, who are ambitious and skilled, to drive their careers. But support for their ideas is required across the organization, rather than from leadership alone. Points of contact and sources of immediate support can be less obvious, and this means achieving ownership of a project or acquiring new responsibility takes more time. And even then someone’s idea may never get adopted. A project’s owner can change—and the idea of “ownership” itself is more flexible. Ideas that don’t get adopted can even be abandoned, leaving a great idea unimplemented or incomplete. Because any associate can “own” an idea in an open organization, these organizations tend to exhibit more redundancy. (Some people immediately think this means “wasted effort,” but I think it can augment the implementation and adoption of innovative solutions. By comparing these organizations, we can also see why Jim Whitehurst calls this kind of culture “chaotic” in The Open Organization).

    Two models of ownership

    In my experience, I’ve seen very clear differences between conventional and open organizations when it comes to the issues of ownership and responsibility.

    In a traditional organization:

    • I couldn’t “own” things as easily
    • I felt frustrated, wanting to take initiative and always needing permission
    • I could more easily see who was responsible because stakeholder responsibility was more clearly sanctioned and defined
    • I could more easily “find” people, because the organizational network was more fixed and stable
    • I more clearly saw what needed to happen (because leadership was more involved in telling me).

    Over time, I’ve learned the following about ownership and responsibility in an open organization:

    • People can feel good about what they are doing because the structure rewards behavior that’s more self-driven
    • Responsibility is less clear, especially in situations where there’s no leader
    • In cases where open organizations have “shared responsibility,” there is the possibility that no one in the group identified with being responsible; often there is lack of role clarity (“who should own this?”)
    • More people participate
    • Someone’s leadership skills must be stronger because everyone is “on their own”; you are the leader.

    Making it work

    On the subject of ownership, each type of organization can learn from the other. The important thing to remember here: Don’t make changes to one open or conventional value without considering all the values in both organizations.

    Sound confusing? Maybe these tips will help.

    If you’re a more conventional organization trying to act more openly:

    • Allow associates to take ownership out of passion or interest that aligns with the strategic goals of the organization. This enactment of meritocracy can help them build a reputation for excellence and execution.
    • But don’t be afraid sprinkle in a bit of “high-level perspective” in the spirit of transparency; that is, an associate should clearly communicate plans to their leadership, so the initiative doesn’t create irrelevant or unneeded projects.
    • Involving an entire community (as when, for example, the associate gathers feedback from multiple stakeholders and user groups) aids buy-in and creates beneficial feedback from the diversity of perspectives, and this helps direct the work.
    • Exploring the work with the community doesn’t mean having to come to consensus with thousands of people. Use the Open Decision Framework to set limits and be transparent about what those limits are so that feedback and participation is organized ad boundaries are understood.

    If you’re already an open organization, then you should remember:

• Although associates initiate projects from “the bottom up,” leadership needs to be involved to provide guidance, give input to the vision, and circulate centralized knowledge about ownership and responsibility, creating a synchronicity of engagement that is transparent to the community.
    • Ownership creates responsibility, and the definition and degree of these should be something both associates and leaders agree upon, increasing the transparency of expectations and accountability during the project. Don’t make this a matter of oversight or babysitting, but rather a collaboration where both parties give and take—associates initiate, leaders guide; associates own, leaders support.

    Leadership education and mentorship, as it pertains to a particular organization, needs to be available to proactive associates, especially since there is often a huge difference between supporting individual contributors and guiding and coordinating a multiplicity of contributions.


    每天推荐一个 GitHub 优质开源项目和一篇精选英文科技或编程文章原文,欢迎关注开源日报。交流QQ群:202790710;电报群 https://t.me/OpeningSourceOrg

  • 2018年4月5日:开源日报第28期

    5 4 月, 2018



    今日推荐开源项目:《Screenshot-to-code-in-Keras 机器学习生成前端代码》

    推荐理由:

    项目介绍

编程实现网页设计的原型一直是一件费时费力的活,工程师往往还需要快速实现并保持还原度。实现静态网页这样的活如果交由机器自动处理,既能提高效率,把工程师从繁重的劳动中解脱出来,又能保证还原度。Screenshot-to-code-in-Keras 这个项目就是希望通过深度学习改变前端开发:它将加快原型设计速度,降低开发软件的门槛。

目前该项目致力于实现静态原型的还原,对于网页特效等处理还是需要人为干预,前端工程师还没有到要下岗的地步啊~如果类似项目继续发展,是否能够做到特效自动实现、业务逻辑自动实现呢?我们拭目以待。到了那一天,前端工程师甚至大部分软件工程师可能都要下岗,全部代码实现都自动化处理;虽然可能会让工程师丢掉饭碗,但这又何尝不是每个工程师向往的那一天呢?

    实践

使用方法可以直接看项目 https://github.com/emilwallner/Screenshot-to-code-in-Keras 的 readme 介绍。

    《Turning design mockups into code with deep learning》这篇文章将为我们介绍如何通过深度学习实现前端编程的智能化,相应的译文可见戳我

小编抱着初学者尝试的心态,按照相关介绍进行了实践。小编使用的是 macOS 环境。

    首先需要安装有 Python 2.7 和 pip

    然后安装项目依赖的一些模块,建议以用户权限安装

    pip install keras --user -U
    pip install tensorflow --user -U
    pip install pillow --user -U
    pip install h5py --user -U
    pip install jupyter --user -U

    然后将项目 clone 到本地,如果你不想在本地跑(因为确实很麻烦),你可以使用 floydhub.com 的相关服务,项目上有相关的介绍。

    git clone https://github.com/emilwallner/Screenshot-to-code-in-Keras
    cd Screenshot-to-code-in-Keras/local
    jupyter notebook

    启动之后会打开 jupyter 的页面

运行 Bootstrap 和 Hello_world 两项。如果在本地跑,小心电脑风扇烧起来:下载资源和训练十分耗时耗资源

    之后运行 HTML 项目能达到此效果

如果您对深度学习或者改变前端开发工作模式十分有兴趣,不妨研究尝试一下。

    三年之后前端工程师们要下岗了么?嗯哼,我猜应该不会。

    作者介绍

    Emil Wallner

    twitter:https://twitter.com/EmilWallner
    生活在巴黎,是一个深度学习方面富有经验和想象力的人,通过此项目预言在未来三年内,深度学习将改变前端开发。


    今日推荐英文原文:《10 commands every Linux user should know》原作者:Sam Bocetta

    原文链接:https://opensource.com/article/18/4/10-commands-new-linux-users

    推荐理由:Linux 操作的精髓尽在命令行界面,很多乐趣也来自于这种传统的交互,各种命令很多,但是掌握这常见的10个命令就能应付绝大多数场合。

     

    10 commands every Linux user should know

Image: Terminal. By Jamie Cox, modified by Opensource.com. CC BY 2.0.

    You may think you’re new to Linux, but you’re really not. There are 3.74 billion global internet users, and all of them use Linux in some way since Linux servers power 90% of the internet. Most modern routers run Linux or Unix, and the TOP500 supercomputers also rely on Linux. If you own an Android smartphone, your operating system is constructed from the Linux kernel.

    In other words, Linux is everywhere.

But there’s a difference between using Linux-based technologies and using Linux itself. If you’re interested in Linux but have been using a PC or Mac desktop, you may be wondering what you need to know to use the Linux command line interface (CLI). You’ve come to the right place. The following are the fundamental Linux commands you need to know. Each is simple and easy to commit to memory. In other words, you don’t have to be Bill Gates to understand them.

    1. ls

You’re probably thinking, “Is what?” No, that wasn’t a typographical error – I really intended to type a lower-case L. ls, or “list,” is the number one command you need to know to use the Linux CLI. It lists the files and directories at a given path in the terminal. For example, this command:

    ls /applications

    shows every folder stored in the applications folder. You’ll use it to view files, folders, and directories.

    All hidden files are viewable by using the command ls -a.
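The difference between ls and ls -a is easy to try in a throwaway directory; the file names below (notes.txt, .secret) are invented for the demo:

```shell
# Sandbox: one visible file and one hidden (dot-prefixed) file.
demo=$(mktemp -d)
touch "$demo/notes.txt" "$demo/.secret"

visible=$(ls "$demo")      # hidden .secret is not listed
all=$(ls -a "$demo")       # -a also shows .secret, plus the . and .. entries

rm -r "$demo"              # clean up the sandbox
```

Any file whose name starts with a dot is treated as hidden, which is why plain ls skips it.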

    2. cd

This command is what you use to go (or “change”) to a directory; it’s how you navigate from one folder to another. Say you’re in your Downloads folder and want to go to a folder called Gym Playlist. Simply typing cd Gym Playlist won’t work, because the shell treats the space as separating two arguments and reports that the folder you’re looking for doesn’t exist. To reach that folder, you need to escape the space with a backslash. The command should look like this:

    cd Gym\ Playlist

To go back from the current folder to its parent, simply type cd .. (think of the two dots as a back button).
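Here is the same navigation in a sandbox; “Gym Playlist” mirrors the example above and the temp directory is a throwaway name:

```shell
# Create a folder whose name contains a space, then cd into and out of it.
demo=$(mktemp -d)
mkdir "$demo/Gym Playlist"

cd "$demo"
cd Gym\ Playlist          # the backslash escapes the space
inside=$(pwd)             # now inside .../Gym Playlist

cd ..                     # the two dots act like a back button
back=$(pwd)

cd /
rm -r "$demo"
```

Quoting the name (cd "Gym Playlist") works just as well as the backslash.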

    3. mv

    This command transfers a file from one folder to another; mv stands for “move.” You can use this short command like you would drag a file to a folder on a PC.

    For example, if I create a file called testfile to demonstrate all the basic Linux commands, and I want to move it to my Documents folder, I would issue this command:

    mv /home/sam/testfile /home/sam/Documents/

The first piece of the command (mv) says I want to move a file, the second part (/home/sam/testfile) names the file I want to move, and the third part (/home/sam/Documents/) indicates the location where I want the file transferred.
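The three-part command can be tried safely in a sandbox; the temp directory below stands in for /home/sam from the example above:

```shell
# Recreate the mv example: a home-like folder with a testfile and a Documents folder.
home=$(mktemp -d)
mkdir "$home/Documents"
touch "$home/testfile"

# mv <source> <destination-folder>
mv "$home/testfile" "$home/Documents/"

moved=$(ls "$home/Documents")    # testfile now lives in Documents
rm -r "$home"
```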

    4. Keyboard shortcuts

    Okay, this is more than one command, but I couldn’t resist including them all here. Why? Because they save time and take the headache out of your experience.

    CTRL+K Cuts text from the cursor until the end of the line

    CTRL+Y Pastes text

    CTRL+E Moves the cursor to the end of the line

    CTRL+A Moves the cursor to the beginning of the line

    ALT+F Jumps forward to the next space

    ALT+B Skips back to the previous space

    ALT+Backspace Deletes the previous word

    CTRL+W Cuts the word behind the cursor

    Shift+Insert Pastes text into the terminal

    Ctrl+D Logs you out

    These commands come in handy in many ways. For example, imagine you misspell a word in your command text:

    sudo apt-get intall programname

You probably noticed “install” is misspelled, so the command won’t work. But keyboard shortcuts make it easy to go back and fix it. If my cursor is at the end of the line, I can press ALT+B twice to move the cursor to the place noted below with the ^ symbol:

    sudo apt-get^intall programname

Now, we can quickly add the letter s to turn intall into install. Easy peasy!

    5. mkdir

    This is the command you use to make a directory or a folder in the Linux environment. For example, if you’re big into DIY hacks like I am, you could enter mkdir DIY to make a directory for your DIY projects.
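A small sketch of mkdir; the -p flag, which creates any missing parent directories in one step, is an extra convenience beyond the basic usage above, and the path names are invented:

```shell
# mkdir makes one directory; mkdir -p builds a whole nested path at once.
base=$(mktemp -d)
mkdir "$base/DIY"
mkdir -p "$base/projects/2018/april"

made=no
if [ -d "$base/DIY" ] && [ -d "$base/projects/2018/april" ]; then made=yes; fi
rm -r "$base"
```

Without -p, mkdir refuses to create a directory whose parent doesn’t exist yet.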

    6. at

If you want to run a Linux command at a certain time, you can add at to the equation. The syntax is at followed by the date and time you want the command to run. The prompt then changes to at> so you can enter the command(s) you want to run at the time you specified.

    For example:

    at 4:08 PM Sat
    at> cowsay 'hello'
    at> CTRL+D

This will run the program cowsay at 4:08 p.m. on Saturday.

    7. rmdir

    This command allows you to remove a directory through the Linux CLI. For example:

    rmdir testdirectory

    Bear in mind that this command will not remove a directory that has files inside. This only works when removing empty directories.

    8. rm

If you want to remove files, the rm command is what you want; it can delete files and directories. To delete a single file, type rm testfile; to delete a directory and everything inside it, type rm -r followed by the directory name.
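The difference between rmdir and rm -r from the last two sections can be demonstrated in a sandbox (all paths below are invented):

```shell
# rmdir refuses a directory that still contains files; rm -r removes it anyway.
base=$(mktemp -d)
mkdir "$base/full"
touch "$base/full/testfile"

if rmdir "$base/full" 2>/dev/null; then rmdir_ok=yes; else rmdir_ok=no; fi

rm -r "$base/full"               # this one succeeds, files and all
still_there=no
if [ -d "$base/full" ]; then still_there=yes; fi
rm -r "$base"
```

Because rm -r deletes without asking, double-check the path before pressing Enter.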

    9. touch

    The touch command, otherwise known as the “make file command,” allows you to create new, empty files using the Linux CLI. Much like mkdir creates directories, touch creates files. For example, touch testfile will make an empty file named testfile.
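A quick check that touch really creates an empty file; the path is a throwaway example (if the file already exists, touch just refreshes its timestamps):

```shell
# Create an empty file and measure its size in bytes.
base=$(mktemp -d)
touch "$base/testfile"

size=$(wc -c < "$base/testfile" | tr -d ' \t')   # a fresh file is 0 bytes
rm -r "$base"
```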

    10. locate

    This command is what you use to find a file in a Linux system. Think of it like search in Windows. It’s very useful if you forget where you stored a file or what you named it.

For example, if you have a document about blockchain use cases but you can’t think of the title, you can punch in locate blockchain, or you can look for “blockchain use cases” by separating the words with asterisks (*). Adding the -i flag makes the search case-insensitive. For example:

locate -i "*blockchain*use*cases*"


    There are tons of other helpful Linux CLI commands, like the pkill command, which is great if you start a shutdown and realize you didn’t mean to. But the 10 simple and useful commands described here are the essentials you need to get started using the Linux command line.


