
Open Source Daily

  • March 23, 2018: Open Source Daily, Issue 15

    March 23, 2018

    Every day we recommend one quality open-source GitHub project and one hand-picked English article on technology or programming; follow Open Source Daily to keep up. QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg


    Today's recommended open-source project: "Android-KTX: Conciseness Is Justice"

    Why we recommend it: Android-KTX is a set of Kotlin extensions for Android app development. Its sole goal is to make Android development with Kotlin concise, pleasant, and idiomatic, without adding any new features.

    How it works

    Seeing past the surface to the underlying mechanism means the library holds no mysteries in use, and problems are easy to track down.

    Android-KTX mainly relies on a few Kotlin language features. Once you understand them, you can easily write similar wrappers yourself and gradually build up a library of your own, consolidating what you've learned.

    Extensions

    The Uri wrapper in the first example above is built on extension functions, which the official Kotlin documentation also covers.

    The source code says it all:

    inline fun String.toUri(): Uri = Uri.parse(this)

    It is simply an extension on String. The Java equivalent below should look familiar; the pattern shows up all the time in day-to-day development:

    public class StringUtil{
        public static Uri parse(String uriString) {
            return Uri.parse(uriString);
        }
    }

    Lambdas

    The second example relies on lambdas; see the Kotlin documentation.

    Again, the code tells the story. SharedPreferences gains an extension function whose parameter is a lambda; when a function's last parameter is a lambda, the call's parentheses can be omitted and the lambda written directly after them.

    inline fun SharedPreferences.edit(action: SharedPreferences.Editor.() -> Unit) {
        val editor = edit()
        action(editor)
        editor.apply()
    }

    Default Arguments

    The examples above don't show this feature, so here is a standalone one (see the official documentation).

    In other words, when a function takes several parameters you don't have to supply each argument in turn as Java requires; you pass only the ones you need. The usual Java workaround is method overloading, with each overload filling in the defaults.

    class ViewTest {
        private val context = InstrumentationRegistry.getContext()
        private val view = View(context)
    
        @Test
        fun updatePadding() {
            view.updatePadding(top = 10, right = 20)
            assertEquals(0, view.paddingLeft)
            assertEquals(10, view.paddingTop)
            assertEquals(20, view.paddingRight)
            assertEquals(0, view.paddingBottom)
        }
    }

    Here is the definition of updatePadding:

    fun View.updatePadding(
        @Px left: Int = paddingLeft,
        @Px top: Int = paddingTop,
        @Px right: Int = paddingRight,
        @Px bottom: Int = paddingBottom
    ) {
        setPadding(left, top, right, bottom)
    }

    Default arguments enable another trick. In Java, the builder pattern is the common way to set each field and then create an object; with default arguments, any value you don't need to change can simply be left at its default, which keeps the calling code very tidy.
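    To make the contrast concrete, here is a minimal, self-contained sketch (the TextStyle class is hypothetical, not part of Android-KTX): default arguments let the caller override only what differs, which is exactly the boilerplate the builder pattern exists to manage in Java.

```kotlin
// Hypothetical example class; in Java this would typically need a builder.
data class TextStyle(
    val sizePx: Int = 14,
    val bold: Boolean = false,
    val color: String = "#000000"
)

// Override only the values that differ from the defaults;
// everything else keeps its default, with no builder boilerplate.
val heading = TextStyle(sizePx = 24, bold = true)
val body = TextStyle() // all defaults
```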

    Advantages:

    Kotlin is already a very concise language for Android development, and Android-KTX makes it more concise still, trimming the code even further.

    Here is the comparison Google provides:

    Kotlin:

    val uri = Uri.parse(myUriString)

    Kotlin with Android KTX:

    val uri = myUriString.toUri()

    Kotlin:

    sharedPreferences.edit()
        .putBoolean("key", value)
        .apply()

    Kotlin with Android KTX:

    sharedPreferences.edit {
        putBoolean("key", value)
    }

    Kotlin:

    val pathDifference = Path(myPath1).apply {
        op(myPath2, Path.Op.DIFFERENCE)
    }
    
    canvas.apply {
        val checkpoint = save()
        translate(0F, 100F)
        drawPath(pathDifference, myPaint)
        restoreToCount(checkpoint)
    }

    Kotlin with Android KTX:

    val pathDifference = myPath1 - myPath2
    
    canvas.withTranslation(y = 100F) {
        drawPath(pathDifference, myPaint)
    }

    Kotlin:

    view.viewTreeObserver.addOnPreDrawListener(
        object : ViewTreeObserver.OnPreDrawListener {
            override fun onPreDraw(): Boolean {
                viewTreeObserver.removeOnPreDrawListener(this)
                actionToBeTriggered()
                return true
            }
        })

    Kotlin with Android KTX:

    view.doOnPreDraw {
        actionToBeTriggered()
    }

    It is plain to see that Android-KTX makes the code noticeably more concise.

    Android-KTX provides a clean API layer on top of the Android framework and support library, which keeps code concise. The parts covering the Android framework are already available in the GitHub repository, and Google has promised that the remaining parts of Android-KTX, covering the Android support library, will ship in upcoming support library releases.

    A note of caution:

    This library is not yet a final release; APIs may still be changed for consistency or removed. If you plan to build projects on it, keep that in mind; early adopters bear the risk.

    Google says, though, that the current preview is only a beginning. Over the coming months they will iterate on the APIs based on developer feedback and contributions; once the APIs stabilize, Google will commit to API compatibility and plans to make Android KTX part of the Android support library.

    About Kotlin:

    Kotlin is a JVM-based language developed by JetBrains. It is 100% interoperable with Java, offers many features Java still lacks, runs in every environment Java does, and is commonly used for server-side code and Android apps. It also has Google's backing and is now officially a first-class language for Android development. As a concise language, Kotlin supports lambdas and closures, reducing the amount of code we write and improving development efficiency.

    Learn more on the official Kotlin website.


    Today's recommended English article: "7 Super Useful Aliases to make your development life easier" by Al-min Nowshad

    原文链接:https://codeburst.io/7-super-useful-aliases-to-make-your-development-life-easier-fef1ee7f9b73

    7 Super Useful Aliases to make your development life easier

     

    npm install --save express
    sudo apt-get update
    brew cask install docker

    Commands like these are part of our daily routine. Whether you're a software developer, DevOps engineer, data scientist, system admin, or in any other technical role, you end up running a handful of commands again and again.

    It's tiresome to type these commands out every single time.

    Wouldn’t it be better if we could use some kind of shortcuts for these commands?

    Meet Your Friend — Alias

    What if I tell you that you can use

    nis express

    Instead of

    npm install --save express

    Stay with me; we'll find out how.

    What's an alias?

    It's a short name for your terminal or shell that is mapped to a longer, more complex command under the hood!

    How ?

    Open your terminal and type alias then press Enter

    You’ll see a list of available aliases on your machine.

    If you look closely you'll find a common pattern for defining aliases:

    alias alias_name="command_to_run"

    So, we’re simply mapping commands with names! (almost)

    Let’s Create a few

    NOTE: For the purpose of this tutorial, keep a single terminal open and use it to test all of these aliases. Use cd if you need to change directories.

    1. Install node Packages

    npm install --save packagename ➡ nis packagename
    Type the command below in your terminal and press Enter –

    alias nis="npm install --save "

    Now, we can use nis express to install express inside your node project.

    2. Git add and commit

    git add . && git commit -a -m "your commit message"
    ➡ gac "your commit message"

    alias gac="git add . && git commit -a -m "

    3. Search Through Terminal History

    history | grep keyword ➡ hs keyword

    alias hs='history | grep'

    Now, if we need to search through our history for everything containing the keyword test, we just have to run hs test to find the expected result.

    4. Make and enter a directory

    mkdir -p test && cd test ➡ mkcd test

    alias mkcd='foo(){ mkdir -p "$1" && cd "$1"; }; foo '

    5. Show my ip address

    curl http://ipecho.net/plain ➡ myip

    alias myip="curl http://ipecho.net/plain; echo"

    6. Open file with admin access

    sudo vim filename ➡ svim filename

    alias svim='sudo vim'

    7. Update your Linux PC/server

    sudo apt-get update && sudo apt-get upgrade ➡ update

    alias update='sudo apt-get update && sudo apt-get upgrade'

    Persistent Aliases

    You've learnt how to create aliases. Now it's time to make them persistent across sessions. Depending on which shell you use, copy-paste your aliases into ~/.bashrc, ~/.bash_profile, or ~/.zshrc if zsh is your default shell.

    Example:

    Since I use zsh as my default shell, I have to edit the ~/.zshrc file to add my aliases. First, let's open it with admin access: sudo vim ~/.zshrc.

    Now I paste in my alias or aliases, like alias hs='history | grep', then save and quit vim by entering :wq.

    For the change to take effect, I run source ~/.zshrc and restart my terminal. From now on, the hs command will be available throughout my system.

    Bonus

    oh-my-zsh is a great enhancement for your shell that ships with a set of default aliases and a beautiful interface.

     



  • March 22, 2018: Open Source Daily, Issue 14

    March 21, 2018



    Today's recommended open-source project: SweetAlert2, a JS message dialog plugin
    Why we recommend it: SweetAlert2 is a powerful, pure-JavaScript modal message dialog plugin. It replaces the browser's default popup dialogs, offers a rich set of options and methods, supports embedded images, custom backgrounds, and HTML markup, and ships with five built-in contextual types.

    Overview

    SweetAlert2 is the successor to sweetalert-js. It fixes sweetalert-js's inability to embed HTML markup, polishes the popup dialog itself, adds support for a range of form elements, and introduces five contextual modal dialog types.

    Installation

    You can install the sweetalert2 dialog plugin with bower or npm:

    • bower install sweetalert2
    • npm install sweetalert2

    Usage

    Using SweetAlert2 requires including sweetalert2.min.css and sweetalert2.min.js in the page; for IE compatibility you also need to include promise.min.js.

    <link rel="stylesheet" type="text/css" href="path/to/sweetalert2/dist/sweetalert2.min.css">
    <script src="path/to/sweetalert2/dist/sweetalert2.min.js"></script>
    <!-- for IE support -->
    <script src="path/to/es6-promise/promise.min.js"></script>

    Basic usage:

    In its simplest form, call swal() to pop up a dialog:

    swal('hello world!');

    To pop up a dialog with a contextual type, pass the type as the third argument:

    swal('Oops...', 'Something went wrong!', 'error');

    swal(…) returns a Promise<boolean>. The isConfirm value passed to the promise's then callback means:

    • true: the Confirm button was pressed.
    • false: the Cancel button was pressed.
    • undefined: the dialog was dismissed by pressing Esc, clicking the cancel button, or clicking outside the dialog.

    Types of modal dialog

    sweetalert2 provides five contextual dialog types (success, error, warning, info, and question).

    Demos

    A dialog with an embedded function

    A chained-dialog example

    SweetAlert2 supports many more effects, such as timed auto-close, custom dialog sizes and backgrounds, and animations. For more demos and details, visit the SweetAlert2 website.

    Link: https://sweetalert2.github.io/

    Browser compatibility

    IE11* | Edge | Chrome | Firefox | Safari | Opera | Android Browser* | UC Browser*
    ✅    | ✅   | ✅     | ✅      | ✅     | ✅    | ✅               | ✅

    * The ES6 Promise polyfill must be included; see the usage example.

    Note that SweetAlert2 does not, and will not, provide any support or features for IE10 and below.

    Related GitHub projects

    • avil13 / vue-sweetalert2 – Vue.js wrapper
    • softon / sweetalert – Laravel 5 package
    • alex-shamshurin / sweetalert2-react – React component


    Today's recommended English article: "A brief introduction to two data processing architectures — Lambda and Kappa for Big Data"

    Original link: https://towardsdatascience.com/a-brief-introduction-to-two-data-processing-architectures-lambda-and-kappa-for-big-data-4f35c28005bb

    A brief introduction to two data processing architectures — Lambda and Kappa for Big Data

    Big Data, Internet of things (IoT), Machine learning models and various other modern systems are becoming an inevitable reality today. People from all walks of life have started to interact with data storages and servers as a part of their daily routine. Therefore we can say that dealing with big data in the best possible manner is becoming the main area of interest for businesses, scientists and individuals. For instance an application launched for achieving certain business goals will be more successful if it can efficiently handle the queries made by customers and serve their purpose well. Such applications need to interact with data storage and in this article we’ll try to explore two important data processing architectures that serve as the backbone of various enterprise applications known as Lambda and Kappa.

    The rapid growth of social media applications, cloud based systems, Internet of things and an unending spree of innovations has made it important for a developer or a data scientist to take well calculated decisions while launching, upgrading or troubleshooting an enterprise application. Although it has been widely accepted and understood that using a modular approach to build an application has multiple advantages and long term benefits, the pursuit for selecting the right data processing architecture still keeps putting question marks in front of many proposals related to existing and upcoming enterprise software. Although there are various data processing architectures being followed around the globe these days let’s investigate the Lambda and Kappa architectures in detail and find out what makes each of them special and in what circumstances one should be preferred over another.

    Lambda Architecture

    Lambda architecture is a data processing technique capable of dealing with huge amounts of data efficiently. Its efficiency shows up as increased throughput, reduced latency, and negligible errors. Here "data processing" means high-throughput, low-latency, near-real-time processing. The architecture also lets developers define delta rules, as code logic or natural language processing (NLP), in event-based data processing models to achieve robustness, automation, and efficiency and to improve data quality. Moreover, any change in the state of data is an event to the system, and delta procedures can be commanded, queried, or expected to run in response to events on the fly.

    Event sourcing is the practice of using events both to make predictions and to store changes to a system in real time; a change of system state, an update to a database, or any event can be understood as a change. For instance, when someone interacts with a web page or a social network profile, events like a page view, a like, or an "Add as a Friend" request are triggering events that can be processed or enriched, with the resulting data stored in a database.

    Data processing deals with event streams, and most enterprise software that follows Domain-Driven Design uses stream processing to predict updates to the core model while storing the distinct events that feed those predictions in a live data system. To handle the many events occurring in a system, i.e. delta processing, Lambda architecture organizes data processing into three distinct layers: the Batch Layer, the Speed Layer (also known as the Stream Layer), and the Serving Layer.

    1. Batch layer

    New data keeps arriving as a feed to the data system and, at every instant, is fed to the batch layer and the speed layer simultaneously. Any new data stream arriving at the batch layer is computed and processed on top of a data lake. Once the data is stored in the data lake, in an in-memory database or long-term persistent storage such as a NoSQL store, the batch layer processes it using MapReduce or machine learning (ML) to precompute the upcoming batch views.

    2. Speed Layer (Stream Layer)

    The speed layer uses the fruit of the event sourcing done at the batch layer. The data streams processed in the batch layer result in an updated delta process, MapReduce job, or machine-learning model, which the stream layer then uses to process the new data fed to it. The speed layer produces outputs based on this enrichment process and supports the serving layer in reducing query latency. As its name suggests, the speed layer has low latency because it deals only with real-time data and carries a lighter computational load.

    3. Serving Layer

    The outputs of the batch layer (batch views) and of the speed layer (near-real-time views) are forwarded to the serving layer, which uses them to answer pending queries on an ad-hoc basis.

    Here is a basic diagram of what Lambda Architecture model would look like:

    Let’s translate that to a functional equation which defines any query in big data domain. The symbols used in this equation are known as Lambda and the name for the Lambda architecture is also coined from the same equation. This function is widely known to those who are familiar with tidbits of big data analysis.

    Query = λ (Complete data) = λ (live streaming data) * λ (Stored data)

    The equation means that all the data related queries can be catered in the Lambda architecture by combining the results from historical storage in the form of batches and live streaming with the help of speed layer.
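    The equation above can be sketched in a few lines of Kotlin (the names and the page-view counts are illustrative, not from any real framework): the serving layer answers a query by merging the precomputed batch view with the near-real-time speed view.

```kotlin
// Serving layer sketch: combine historical counts (batch view)
// with live increments (speed view) to answer a query.
fun query(batchView: Map<String, Long>, speedView: Map<String, Long>): Map<String, Long> {
    return (batchView.keys + speedView.keys).associateWith { key ->
        (batchView[key] ?: 0L) + (speedView[key] ?: 0L)
    }
}

// Batch layer counted page views up to the last batch run;
// speed layer holds events that arrived since then.
val batch = mapOf("home" to 100L, "about" to 40L)
val speed = mapOf("home" to 3L, "signup" to 1L)
val result = query(batch, speed) // home=103, about=40, signup=1
```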

    Applications of Lambda Architecture

    Lambda architecture can be deployed for those data processing enterprise models where:

    • User queries are required to be served on ad-hoc basis using the immutable data storage.
    • Quick responses are required and system should be capable of handling various updates in the form of new data streams.
    • None of the stored records shall be erased and it should allow addition of updates and new data to the database.

    Lambda architecture can be considered a near-real-time data processing architecture. As mentioned above, it withstands faults and allows scalability. It uses the batch layer and stream layer together, continually adding new data to the main storage while ensuring the existing data remains intact. Companies like Twitter, Netflix, and Yahoo use this architecture to meet their quality-of-service standards.

    Pros and Cons of Lambda Architecture

    Pros

    • Batch layer of Lambda architecture manages historical data with the fault tolerant distributed storage which ensures low possibility of errors even if the system crashes.
    • It is a good balance of speed and reliability.
    • Fault tolerant and scalable architecture for data processing.

    Cons

    • It can result in coding overhead due to involvement of comprehensive processing.
    • Re-processes every batch cycle which is not beneficial in certain scenarios.
    • Data modeled with Lambda architecture is difficult to migrate or reorganize.

    Kappa Architecture

    In 2014, Jay Kreps started a discussion pointing out some shortcomings of the Lambda architecture, which led the big data world to an alternative architecture that used fewer code resources and performed well in enterprise scenarios where a multi-layered Lambda architecture seemed like extravagance.

    Kappa architecture should not be taken as a substitute for Lambda architecture; on the contrary, it is an alternative for circumstances where the active performance of a batch layer is not necessary to meet the standard quality of service. It finds its applications in the real-time processing of distinct events. Here is a basic diagram of the Kappa architecture, showing its two-layer system of operation.

    Kappa Architecture

    Let’s translate the operational sequencing of the kappa architecture to a functional equation which defines any query in big data domain.

    Query = K (New Data) = K (Live streaming data)

    The equation means that all queries can be answered by applying the kappa function to the live streams of data at the speed layer. It also signifies that stream processing occurs on the speed layer in the kappa architecture.
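    For contrast with the Lambda sketch earlier, here is an equally illustrative Kotlin sketch of the Kappa idea (the names are invented): there is no batch view at all; the single view is derived from the event log, and reprocessing simply means replaying that log through the (possibly new) stream-processing code.

```kotlin
// Kappa sketch: state is rebuilt by replaying the event log;
// queries read the stream-derived state directly.
fun replay(log: List<Pair<String, Long>>): Map<String, Long> {
    val state = mutableMapOf<String, Long>()
    for ((key, delta) in log) {
        // Every event updates the single, stream-derived view.
        state[key] = (state[key] ?: 0L) + delta
    }
    return state
}

// Reprocessing is just replaying the same retained log.
val eventLog = listOf("home" to 100L, "home" to 3L, "signup" to 1L)
val view = replay(eventLog) // home=103, signup=1
```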

    Applications of Kappa architecture

    Some social network applications, devices connected to cloud-based monitoring systems, and Internet of Things (IoT) systems use this optimized variant of the Lambda architecture, relying mainly on the services of the speed layer combined with a streaming layer to process data over the data lake.

    Kappa architecture can be deployed for those data processing enterprise models where:

    • Multiple data events or queries are logged in a queue to be served against a distributed file system storage or history.
    • The order of events and queries is not predetermined; stream processing platforms can interact with the database at any time.
    • Resilience and high availability are required, with each node of the system handling terabytes of storage to support replication.

    The above data scenarios are handled by leveraging Apache Kafka, which is extremely fast, fault tolerant, and horizontally scalable. It provides a better mechanism for governing data streams, and balanced control over the stream processors and databases lets applications perform as expected. Kafka retains ordered data for long durations and answers analogous queries by linking them to the appropriate position in the retained log. LinkedIn and some other applications use this flavor of big data processing and reap the benefit of retaining large amounts of data to serve queries that are mere replicas of each other.

    Pros and Cons of Kappa architecture

    Pros

    • Kappa architecture can be used to develop data systems that are online learners and therefore don’t need the batch layer.
    • Re-processing is required only when the code changes.
    • It can be deployed with fixed memory.
    • It can be used for horizontally scalable systems.
    • Fewer resources are required as the machine learning is being done on the real time basis.

    Cons

    The absence of a batch layer might result in errors during data processing or while updating the database, which requires an exception manager to reprocess the data or perform reconciliation.

    Conclusion

    In short, the choice between the Lambda and Kappa architectures is a tradeoff. If you seek an architecture that is reliable in updating the data lake and efficient at devising machine-learning models to predict upcoming events robustly, use the Lambda architecture, which reaps the benefits of both the batch layer and the speed layer to ensure few errors and high speed. On the other hand, if you want to deploy big data architecture on less expensive hardware and need it to deal effectively with unique events as they occur at runtime, then select the Kappa architecture for your real-time data processing needs.



  • March 21, 2018: Open Source Daily, Issue 13

    March 21, 2018



    Today's recommended open-source project: Winamp, a reborn multimedia player

    Winamp is a reborn multimedia player. It has every feature of today's mainstream players, plus features they lack.

    Overview

    Winamp was originally developed by Nullsoft and was a pioneer of digital media playback. As an MP3 player, it spread rapidly on the strength of its superior sound quality and features. But multimedia technology advanced very quickly: video and audio boomed, computers became dramatically faster, and large hard drives and DVDs became commonplace, so Winamp gradually lost its competitive edge, and eventually the project was officially shut down. Now the project has been brought back and developed successfully; Winamp has been reborn.

    As for Winamp's versatility: compared with today's mainstream players, such as NetEase Cloud Music, Winamp can also play video with dazzling visualizations during playback, and it even includes a browser, a feature you would hardly expect.

    That is Winamp: a multimedia player reborn.


    How Winamp differs from other players:

    Take QQ Music as an example.

    Like QQ Music, Winamp is a multimedia music player, and both cover the basics: local playback and online playback. The difference is that QQ Music is only a music player, while Winamp is much more than that. Winamp has many features QQ Music lacks, such as visualizations and a video player; surprisingly, it even has a built-in browser, so you can go online directly inside Winamp. It is a remarkably complete package.

    Winamp also supports more formats than QQ Music does, and more than other mainstream music players as well.

    Winamp's features

    Audio: the player supports MP3, MP2, MOD, S3M, MTM, ULT, XM, IT, 669, CD-Audio, Line-In, WAV, VOC, and many other audio formats, enough to cover the vast majority of music.

    Video: the player itself cannot play video formats like mp4 (it can once a plugin is installed), but it can play the audio track inside an mp4.

    Visualizations: playing music can be a feast for the eyes too:

    Skins: the player's interface is customizable, similar to KuGou Music's skin feature. Besides customizing the panel layout, you can also customize the panel colors:

    Online playback: at first I thought this software could only play local music, but it turns out to support online playback. Its online service organizes music into categories, each with finer subdivisions, so you can listen to whatever you feel like; as I recall, NetEase Cloud Music's categories are not this fine-grained.

    Device management: the software has a mobile version and supports wireless desktop sync as well as sync over USB.

    Plugins: many of Winamp's flashiest features come from plugins (more than 60 are supported). Take DFX, an audio-effects plugin: even at its default settings it lifts Winamp's sound reproduction a notch, and it can further adjust fidelity, ambience, 3D surround, dynamic boost, deep bass, and more. Even a cheap onboard sound card can produce near-professional sound, and, pleasingly, DFX does not consume many system resources.

    Winamp's equalizer

    Winamp has a built-in equalizer. You can tune the level of each frequency band yourself or start from the bundled presets. The equalizer can also balance the left and right channels and act as a preamplifier. The web version of Winamp has an equalizer as well.

    Preamplifier

    Turn the preamp all the way up and the most obvious effect is that everything sounds louder. What a preamp actually does is amplify the incoming signal to reduce outside interference. At maximum, faint sounds become clearer, but noise is amplified too; the default setting is usually fine.

    Equalizer bands

    Winamp's built-in equalizer adjusts the sound at 70 Hz, 180 Hz, 320 Hz, 600 Hz, 1 kHz, 3 kHz, 6 kHz, 12 kHz, 14 kHz, and 16 kHz.

    70 Hz is bass, the foundation of the sound. Too little makes the sound thin; too much makes it muffled, and with headphones you feel an inexplicable rumble.

    180 Hz and 320 Hz are low-mids. Maxing them out over-emphasizes the bass; lowering them brings vocals forward.

    600 Hz and 1 kHz are mids. At maximum the sound resembles a telephone call, as if heard through a membrane; lowering them weakens vocals.

    3 kHz is upper-mid. Raising it makes vocals piercing; lowering it flattens out vocal pitch variation.

    6 kHz is treble. At minimum the sound becomes noticeably dull; raising it makes it crisp and bright.

    12 kHz, 14 kHz, and 16 kHz are the extreme highs. At maximum the sound becomes dizzying; lowered, it feels suppressed.


    Today's recommended English article: "Don't be a Junior Developer" by Andrei Neagoie

    Don’t be a Junior Developer

    Seriously, don’t be a junior developer. A junior developer puts this title in their resume, emails, and LinkedIn… They pronounce it to the world. Don’t.

    When you do that, this is what recruiters and companies see: “Hi, I’m desperately looking to get hired as a developer. I’m still new at this, but can you please please please place a bet on me and hope that I turn out to be an asset and not a liability for your company. Oh, and I’m also going to need a lot of help from your staff for the first 6 months!”

    But, I AM a junior developer!… you say. If that is the case, then you will have better long term success if you focus on improving your skills to become an intermediate developer. Only then, you should start applying to jobs. Dedicate yourself full time on learning proper skills. This way, you don’t pigeonhole yourself to the “junior” developer role that you brand yourself as. Remember, first impressions are important. By getting hired as a junior developer, you will have to spend a longer time getting out of that role than if you would have, if you spent a little more time getting comfortable calling yourself an intermediate developer and getting hired into that role right away.

    But when would I know when I'm not a junior developer?…you say. You won't. You will always feel like you don't know enough. You will always feel like others are smarter than you. This is called impostor syndrome. It's normal and every developer feels it. But here is a simple test for web developers: Can you explain to your family members how the internet works? How a computer works? How websites work? Do you have a basic understanding of HTML, CSS and Javascript so you can build your own websites? Do you know a little bit of React? Have you built a few projects on your own on Github, and are you comfortable putting up websites and apps online? Good, then you are not a junior developer.

    But I need a job right now!..you say. Stop that short term thinking. Unless your job involves working with really smart people that you can learn from every day, on technologies that are relevant and current (few junior developer roles offer this), your time would be better invested learning skills to get out of the junior mindset. Long term, you will earn more money, work with better developer teams, and you will be more likely to work for a company that teaches you and lets you work with up-to-date technologies every day. Don't work on updating a WordPress plugin as the resident junior developer of a law firm. That won't help you long term.

    If you apply for junior developer roles, the best case scenario: You become a junior developer.

    If you apply for intermediate developer roles, the best case scenario: You become an intermediate developer.

    Don’t sell yourself short.

    Ok, great pep talk Andrei, but I still have no idea what I'm doing. I'm definitely still a junior developer!…you say. Fair point. I'm currently working on the ultimate resource to get people out of the "junior mindset". The best way to do that is to understand the whole developer ecosystem on the web, including the selective knowledge known only to senior developers. This course will include things that nobody teaches you in one place, things for which you can only find fragmented, vague, and outdated tutorials online. Here are the topics I will be teaching:

    • SSH
    • Linux Servers
    • Performance (from minimizing DOM updates to Load Balancing)
    • Security
    • State Management
    • AWS lambda and other server-less architectures
    • Typescript
    • Server Side vs Single Page Applications
    • Testing
    • Docker
    • Sessions with JWT
    • Redis
    • Progressive Web Apps
    • Continuous Integration/ Continuous Delivery
    • (…maaaaybe GraphQL)

    These are the topics that will make sure you are not a junior developer. The course will be focused on connecting the dots on all of these so that next time you are in an interview, you can speak intelligently about current tactics for building projects, architecture, and setting up developer environments. It is the successor to my learn to code in 2018 course.

    Stay tuned for Part 2 of this article where I will go through each one of the above topics in simple terms.

    If you take away one thing from this article…

    Stop calling yourself a junior developer. Have a junior developer mindset where you are constantly looking to learn from others, but never settle for a junior developer role. Apply for roles for which you are underqualified, not overqualified. Remember that if you never ask, the answer will always be no.

    Don’t overestimate the world and underestimate yourself. You are better than you think.



  • March 20, 2018: Open Source Daily, Issue 12

    March 20, 2018



    Today's recommended open-source project: Xray, a next-generation text editor

    Why we recommend it: Xray is a new Electron-based text editor under development by the Atom team. It is unfinished and still experimental. Drawing on its experience building Atom and applying new ideas with fast, validating iterations, the team hopes to build a high-performance, highly extensible, highly compatible, cross-platform text editor usable in any web application.

    Performance: the team has made performance the project's foremost feature, and the performance targets are ambitious, as the figure below shows.


    Ever since Atom launched, its performance has been criticized by the community and by users, especially when loading large files, so the Atom team plans to reimplement the UI layer with WebGL. They do not want to abandon Electron, however, because they still believe it is the best platform for building cross-platform, extensible interfaces.

    Core logic: Xray's biggest departure from Atom is that the core logic will be written in Rust. Rust's type system and concurrency support should make the overall architecture lighter and more maintainable. But Rust is a relatively niche language with few large-scale deployments so far; dropping Node for Rust is a challenge not only for the team's own development but also raises the bar for future community contributions.

    Atom and Xray: as things stand, Xray is not yet a mature project. The team itself says various experiments are still under way, and only after them can an overall development timeline be drawn up. So in the short to medium term Xray cannot yet serve as Atom's successor, but one thing is clear: once Xray's key techniques have been validated, the Atom team will surely shift most of its development effort to Xray.


    Xray's overall architecture differs substantially from Atom's. Xray is positioned as more than a simple editor: the team wants it to be both a polished personal editor and a powerful tool for GitHub-based team collaboration. Its architecture and ambitions are grand.

    Discussion around Xray

    Editors are never short of debate; an editor is a developer's constant companion. From the day you start programming it may spend more time with you than anyone else, so choosing an editor can feel as weighty as choosing a house or a car. There are many interesting discussions and viewpoints on Zhihu about the Atom team's Xray experiment: https://www.zhihu.com/question/268413089 is worth following and mulling over.

    In this editor's own experience, which editor you choose depends on the kind of work and role you choose. Having moved from Turbo Pascal and Turbo C in the early days to vim, I have settled on JetBrains' full suite of IDEs.


    Today's recommended English article: "I learned all data structures in a week. This is what it did to my brain"

    I learned all data structures in a week. This is what it did to my brain.

    Over the last week, I studied seven commonly used data structures in great depth. The impetus for embarking on such a project was a resolution I made at the beginning of the year to train myself to be a better software engineer and write about things I learned in the process.

    In the three years since I first studied them during my undergraduate degree, I felt no glimmer of temptation to study any of them again; it wasn't the complex concepts that kept me away, but their absence from my day-to-day coding. Every data structure I've ever used was built into the language, and I'd forgotten how they worked under the hood.

    They were inescapable now. There are seven data structures in the series to be studied.

    Let us go back to where it all began. Like every invention has a necessity, and having data structures also had one.

    Say you’ve to find a specific book in an unorganized library. You’ve put in numerous hours in shuffling, organizing and searching for the book. The next day the librarian walks in, looks at the mess you’ve created and organizes the library in a different way.

    Problem?

    Just as you can organize books in a library in a hundred different ways, you can structure data in a hundred different ways. So we need a way to organize our books (read: data) so that we can find the one we want (read: data) as efficiently and quickly as possible.

    Solution :

    Luckily for us, some uber-smart people have built great structures that have stood the test of time and solve this problem for us. All we need to do is learn how they work and how to use them. We call them data structures. Accessing, inserting, deleting, finding, and sorting data are some of the well-known operations a data structure supports.

    The first entry in the series, the array, seems to leave no need for any other data structure. And yet there are so many more. I don't have the energy to describe why one data structure triumphs over another, but I'll be honest with you: knowing multiple data structures does matter.

    Still not convinced?

    Let’s try a few operations with our beloved array. Want to find something? Just check every slot. Want to insert something in the middle? Just shift every element over to make room.

    Easy-peasy, right?

    The thing is, all of these are slow. We want to find, sort, and insert data as efficiently as possible. An algorithm may perform these operations a million times; if you can’t do them efficiently, many other algorithms built on top of them become inefficient too. As it turns out, you can do lots of things faster if you arrange the data differently.
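    The two array operations above can be sketched in a few lines of Python (the helper names are illustrative, not from any library); both visit or shift up to n elements, which is why they are O(n):

```python
def find(items, target):
    """Linear search: check every slot until we hit the target."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1  # not found

def insert_middle(items, index, value):
    """Insert by shifting: everything after `index` moves one slot right."""
    items.append(None)               # grow the array by one slot
    for i in range(len(items) - 1, index, -1):
        items[i] = items[i - 1]      # shift elements to make room
    items[index] = value

nums = [1, 2, 4, 5]
insert_middle(nums, 2, 3)
print(nums)           # [1, 2, 3, 4, 5]
print(find(nums, 4))  # 3
```

    A single insert shifting a handful of elements is harmless; do it a million times on a large array and the shifting dominates.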

    You may think, “Ah, but what if they ask me trivia questions about which data structure is most important, or ask me to rank them?”

    At which point I must answer: should that happen, just offer them this: the ranking of data structures is at least partially tied to the problem context. And never, ever forget to analyze the time and space performance of the operations.

    But if you want a ranking of how hard the different data structures are to learn, below is the list, from most tolerable to “oh dear god”:

    • Array
    • Stacks
    • Queues
    • Linked List
    • Hash Tables
    • Trees
    • Graphs

    You will want to keep trees and graphs somewhere near the end, for, I must confess, they are huge and deal with zillions of concepts and different algorithms.

    Maps and arrays are easy. You’ll have a difficult time finding a real-world application that doesn’t use them; they are ubiquitous.

    As I worked my way through the other structures, I realized one does not simply eat the chips from a Pringles tube; you pop them. The last chip to go into the tube is the first one to go into my stomach (LIFO). The pearl necklace you gifted your valentine is nothing but a circular linked list, with each pearl containing a bit of data. You just follow the string to the next pearl of data, and eventually you end up at the beginning again.
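    The pearl-necklace analogy can be sketched as a toy circular linked list (the `Pearl` class and function names below are my own illustration): each node holds some data and a pointer to the next node, and the last node points back to the first.

```python
class Pearl:
    """One pearl on the necklace: a bit of data plus a link to the next."""
    def __init__(self, data):
        self.data = data
        self.next = None

def make_necklace(values):
    """Link the pearls in a circle: the last pearl points back to the head."""
    head = Pearl(values[0])
    node = head
    for v in values[1:]:
        node.next = Pearl(v)
        node = node.next
    node.next = head   # close the loop
    return head

def walk(head, steps):
    """Follow the string; after a full loop you are back at the start."""
    seen, node = [], head
    for _ in range(steps):
        seen.append(node.data)
        node = node.next
    return seen

necklace = make_necklace(["a", "b", "c"])
print(walk(necklace, 5))   # ['a', 'b', 'c', 'a', 'b']
```

    Walking five steps around a three-pearl necklace wraps past the start, which is the whole point of the circular link.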

    Our brain somehow makes the leap from being the most important organ to one of the world’s best examples of a linked list. Consider the thinking process when you’ve placed your car keys somewhere and can’t remember where: the brain follows associations, linking one memory to another, until we finally recall the lost memory.

    We are connected on Medium; thank you, graphs. When a data structure called a tree goes against nature’s tradition of having roots at the bottom, we accept it handily. Such is the magic of data structures. There is something ineffable about them: perhaps all our software is destined for greatness, and we just haven’t picked the right data structure yet.

    Here, in the midst of all this theory, is one of the most nuanced and beautiful real-life examples of the stack and queue data structures I’ve seen.

    Browser back/forward button and browsing history

    As we navigate from one web page to another, those pages are placed on a stack. The page we are currently viewing is on top, and the first page we looked at is at the base. Clicking the Back button moves us in reverse order through the pages. Browsing history, on the other hand, uses a queue: new pages are added at one end, and old pages are removed from the other (for example, after 30 days).
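    One common way to model the back/forward buttons is with two stacks, sketched below with an illustrative `Browser` class (not a real browser API): visiting a page pushes the old one onto the back stack, going back pops from it while pushing the current page onto the forward stack, and vice versa.

```python
class Browser:
    """Toy back/forward navigation built on two stacks (plain lists)."""
    def __init__(self, home):
        self.current = home
        self.back_stack = []      # pages behind the current one
        self.forward_stack = []   # pages ahead, populated by going back

    def visit(self, url):
        self.back_stack.append(self.current)
        self.current = url
        self.forward_stack.clear()   # a fresh visit invalidates "forward"

    def back(self):
        if self.back_stack:
            self.forward_stack.append(self.current)
            self.current = self.back_stack.pop()
        return self.current

    def forward(self):
        if self.forward_stack:
            self.back_stack.append(self.current)
            self.current = self.forward_stack.pop()
        return self.current

b = Browser("home")
b.visit("page1")
b.visit("page2")
print(b.back())      # page1
print(b.back())      # home
print(b.forward())   # page1
```

    Note the detail real browsers also implement: visiting a new page after going back throws away the old forward chain.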

    Now pause for a moment and imagine how many times we, as both a user and developer, use stacks and queues. That is amazing, right?

    But my happiness was short-lived. As I progressed through the series, I learned there is a data structure based on a doubly linked list that handles the browser’s back and forward functionality more efficiently, in O(1) time per step.
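    A minimal sketch of that doubly-linked-list idea (class and method names are my own, purely illustrative): each page node keeps `prev` and `next` pointers, so both back and forward are a single pointer hop, with no popping between stacks.

```python
class PageNode:
    """One visited page, doubly linked to its neighbors in history."""
    def __init__(self, url):
        self.url = url
        self.prev = None
        self.next = None

class History:
    """Back/forward navigation as a cursor into a doubly linked list."""
    def __init__(self, home):
        self.current = PageNode(home)

    def visit(self, url):
        node = PageNode(url)
        node.prev = self.current
        self.current.next = node   # drops any old forward chain
        self.current = node

    def back(self):
        if self.current.prev:           # O(1): follow one pointer
            self.current = self.current.prev
        return self.current.url

    def forward(self):
        if self.current.next:           # O(1): follow one pointer
            self.current = self.current.next
        return self.current.url

h = History("home")
h.visit("page1")
h.visit("page2")
print(h.back())      # page1
print(h.forward())   # page2
```

    Both this and the two-stack version do constant work per click; the linked-list version simply never moves elements between containers, it just slides a cursor.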

    That is the problem with data structures. I am touched and impressed by a use case, and then everyone starts talking about why one structure should be preferred over another based on time complexities, and I feel my brain cells atrophying.

    In the end, I am left not knowing what to do. I can’t look at things the same way ever again. Maps are graphs. Trees look upside down. I pushed my article into Codeburst’s queue to be published; I wish they introduced something like priority writers, which might help me jump the queue. These data structures look absolutely asinine, yet I cannot stop talking and thinking about them. Please help.

