
开源日报

  • 开源日报 Issue 464: "An Unconventional Path: CJSS"

    June 22, 2019
    开源日报 recommends one quality GitHub open source project and one hand-picked English tech or programming article every day. Keep reading 开源日报 and make daily learning a habit.
    Today's recommended open source project: "An Unconventional Path: CJSS"
    Today's recommended English article: "The Future of AI & Self-Driving Cars"

    Today's recommended open source project: "An Unconventional Path: CJSS". Link: GitHub
    Why we recommend it: Everyone knows that HTML+CSS+JS can do a great many things. But take HTML away, and what can CSS+JS do on their own? This project demonstrates a new possibility: using CSS+JS to take over HTML's job of displaying content. It lets you write the content that would normally live in HTML inside your CSS, and the page still works just like HTML. That said, it is far more cumbersome than plain HTML, so don't actually reach for it when your HTML file has a problem: fixing that problem is a better use of your time than tinkering with this project.
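    The core trick can be sketched in a few lines of JavaScript. This is a simplified re-implementation of the idea, not the project's actual code: it assumes content is stored in a --html custom property wrapped in parentheses (the property name here is illustrative) and extracts it for rendering.

```javascript
// Simplified re-implementation of the CJSS idea, not the project's real code:
// page content lives in a CSS custom property and JavaScript extracts it.
// The --html property name and parenthesised value are illustrative.
function extractHtmlProperty(cssRule) {
  const match = cssRule.match(/--html:\s*\(([^)]*)\)/);
  return match ? match[1] : null;
}

const rule = "body { --html: (<h1>Hello from CSS</h1>) }";
console.log(extractHtmlProperty(rule)); // "<h1>Hello from CSS</h1>"
```

    In the real project the stylesheet is parsed in the browser and the extracted markup is injected into the DOM; only the extraction step is shown here so the sketch stays self-contained.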
    Today's recommended English article: "The Future of AI & Self-Driving Cars" by Devin Morrissey
    Original link: https://botpublication.com/the-future-of-ai-self-driving-cars-d02ebc4bda5b
    Why we recommend it: People may never come to trust self-driving systems, but that is not the only way AI can help with driving.

    The Future of AI & Self-Driving Cars

    If there’s one area of the tech industry that seems to be unstable these days, it’s self-driving cars. The world of autonomous vehicles seems to be stuck in a self-defeating loop of progress and setbacks that begs the question: When will self-driving cars finally become a reality in our everyday lives?

    Of course, genuine self-driving automobiles have been in operation for quite a while at this point. In 2014, Google announced that it had developed self-driving cars that had collectively driven 300,000 miles without a single accident.

    But for every triumphant announcement like that one, a news story breaks about an accident or other setback involving an autonomous vehicle, and the question arises of whose fault the accident was. Naturally, incidents like these can create shake-ups and major delays.

    With all of that said, the question still stands. When will we share the roads with fleets of autonomous vehicles flawlessly shuttling their passengers from one destination to another? Let’s break down the situation and find some answers.

    AI in the Self-Driving Sector

    A key component to a truly self-driving car is the development of a truly capable artificial intelligence system.

    AI is advancing by leaps and bounds every year. From robotic lawyers to AI-powered musical composition, traditional crafts and careers with rich and complex histories are constantly being challenged by the arrival of artificial intelligence on the scene.

    With so much in motion, then, where do self-driving cars stand in the AI “arms race” that is propelling nearly every sector of business and tech forward at increasingly mind-numbing speeds?

    Unfortunately, the answer to the question is as confusing as asking if a human teenager is truly ever “ready to drive.” Even if they pass the driver’s test with flying colors, their inexperience remains an unavoidable handicap until they’ve had a chance to iron out their flaws.

    In the same vein, answers as to whether AI is ready to take to the roads en masse via self-driven cars seem to be mixed. On the one hand, AI has been influencing the business world for quite some time now. In the area of autonomous cars, it has had an excellent driving track record. However, the impact of artificial intelligence in any sector has historically trended towards a more gradual change than a sudden one.

    In other words, AI isn’t a switch you simply turn on to automate something. The integration is typically much more subtle. For instance, as machine-learning algorithms have grown increasingly more refined over time, they’ve been able to take on more and more things, like braking and lane detection.

    In cities, AI has managed to have a greater impact in areas like public transportation. For instance, it has been successfully integrated into helping to run China’s Shanghai Maglev bullet train.

    But, for all of these significant accomplishments, artificial intelligence still hasn’t fully been given the wheel, and judging when it will truly be ready to drive on its own remains a rather elusive subject.

    How Bots Will Factor Into the Equation

    While an exact timeline for the commercialization of fully autonomous vehicles refuses to present itself, there’s no doubt that bots are shaping up to play a major part in the final product, whenever it does arrive. Projecting how that impact will look creates some interesting scenarios.

    For instance, it’s easy to see how bots can help in areas like healthcare or business. The former has seen huge leaps in areas like telehealth, where patients often interact with chatbots when inquiring about care or finding out the status of requests and results.

    In a similar manner, the integration of bots into the business sector in general has completely redefined traditional things like supply chains and inventory management. From tracking products to processing orders and providing various levels of customer service and interaction, bots are playing a heftier role in day-to-day operations across the board every passing year. In fact, supply chain automation is one of the biggest modern trends in inventory management.

    In an area like self-driving cars, where there is little to no precedent or data to work from (at least on a larger scale), the actual impact of bots is naturally a bit more speculative. However, there’s no doubt that they’ll have an important part to play once the rubber finally hits the road in earnest.

    For example, an obvious way that chatbots, in particular, will find a home in the self-driving world is through communicating with passengers in order to pick them up, transport them, and then drop them off in appropriate locations. In fact, that is exactly what Uber’s ex-CEO Travis Kalanick had in mind when he shared how his company’s autonomous future could involve chatbot technology.

    Another area that consumers and industry leads alike will be curious to see unfold is how bots become involved in monitoring the plethora of systems that will be present in each vehicle. While the vehicles are new, repairs and other driving issues will likely be few and far between, but long-standing car-buying wisdom dictates that buying used cars rather than new ones can be economically savvy.

    With the likelihood that consumers will look for used self-driving cars at least as often as new ones, another concern that naturally arises is how older self-driving cars will hold up over time. Will bots be able to play a part in informing a vehicle’s owners about maintenance and other operational failures before they become dangerous? Only time will tell.

    Bots Playing Their Part

    Whatever the details, there’s no doubt that chatbots and bot tech, in general, will continue to find a home in the self-driving sector. Whether it comes in the form of diagnosing problems, interacting with passengers, or anything else, having bots available will certainly help facilitate the process of bringing autonomous vehicles to the road for commercial use as soon as is safely possible.
    Download the 开源日报 app: https://opensourcedaily.org/2579/
    Join us: https://opensourcedaily.org/about/join/
    Follow us: https://opensourcedaily.org/about/love/
  • 开源日报 Issue 463: "Multiple Choice: javascript-questions"

    June 21, 2019
    Today's recommended open source project: "Multiple Choice: javascript-questions"
    Today's recommended English article: "Programmer Protocol."

    Today's recommended open source project: "Multiple Choice: javascript-questions". Link: GitHub
    Why we recommend it: Programming-language exams love this kind of question: a snippet of code gets slapped on the page, and you are asked what it prints. Some of the code is so contrived you might not use it in a real project in three hundred years, but admittedly this very strange code is a good test of whether you fully understand the details. This project collects many of the multiple-choice questions that appear in JS exams, so you can check whether you really understand the finer concepts. Some of them may look rarely used, yet they may be exactly what you need when implementing a particular algorithm.
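    A made-up question in the same spirit as the ones the repo collects: the classic `var`-in-a-loop closure trap.

```javascript
// A made-up question in the spirit of the repo: what does this print?
function whatIsThis() {
  const result = [];
  for (var i = 0; i < 3; i++) {
    result.push(() => i); // `var` is function-scoped: every closure shares one i
  }
  return result.map((f) => f());
}

console.log(whatIsThis()); // [ 3, 3, 3 ]; with `let i` it would be [ 0, 1, 2 ]
```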
    Today's recommended English article: "Programmer Protocol." by Pran jan
    Original link: https://medium.com/@pranj8030/programmer-protocol-97e27c519f68
    Why we recommend it: Skills that may not seem to matter while you are playing around at school, but prove useful the moment you start working.

    Programmer Protocol.

    Software engineers spend a lot of time gaining skills for interviews by practicing LeetCode problems and perfecting resumes.

    Once they finally get that job at a startup, Google, Amazon, or another corporation, they might find the skills they used to get the job don’t match the ones they need in their everyday work.

    Our team was inspired by the seven skills of highly effective programmers created by the TechLead. We wanted to provide our own take on the topic.

    Here are our seven skills of effective programmers.

    1. Learn How to Read Other People’s Code


    Everyone but you writes terrible code.

    That is why a great skill that has multiple benefits is being able to follow other people’s code.

    No matter how messy or poorly thought out a previous engineer’s code is, you still need to be able to wade through it. After all, it’s your job. Even when that engineer was you one year prior.

    This skill benefits you in two ways. One, being able to read other people’s code is a great chance to learn what bad design is. While you are looking through other people’s code you learn what works and what doesn’t. More importantly, you learn what type of code is easy for another engineer to follow and what code is hard to follow.

    You need to make sure you gripe as much as possible as you are reading over other people’s code. That way, other engineers understand how much of a superior engineer you are.

    Make sure you bring up points about the importance of maintainable code and good commenting. This further shows your dominance in the area of programming.

    Your code should be so well-designed that it requires no documentation. In fact, you shouldn’t document any of your code if you are a good programmer. This is just a waste of time and you need to spend your time coding and in meetings.

    Being able to read other people’s messy code also makes it easy to make updates when needed. This occasionally means updating code you lack experience in. For instance, we once followed a script from PowerShell to Python to Perl. We had limited experience in Perl, but we still had enough context to figure out what was going on and make the changes needed.

    This comes from having a decent understanding of all the code as well as being able to read the Perl scripts.

    Reading other people’s code makes you valuable because you can follow even over-engineered systems that might stump others.

    2. A Sense for Bad Projects

    There are many skills that take time to learn. One of the skills we believe is worth knowing is understanding what projects are not worth doing and what projects are clearly death marches.

    Large companies always have many more projects going than will probably ever be completed or impactful. There are some projects that might not make any business sense (at least not to you), and there are others that are just poorly managed. This is not to say that you should cut off an idea right when you disagree with the project. However, if the stakeholders can’t properly explain what they will be doing with the end result, then perhaps the project is not worth doing.

    Also, some projects might be so focused on the technology instead of the solution that it might be clear from the beginning that there won’t be a lot of impact. This skill requires doing a lot of bad projects before you have an idea of what a bad project really is. So don’t spend too much time early on trying to discern each project.

    At some point in your career, you will just have a good gut sense.

    3. Avoiding Meetings


    Whether you are a software engineer or data scientist, meetings are a necessity because you need to be able to get on the same page with your project managers, end-users, and clients. However, there is also a tendency for meetings to suddenly take over your entire schedule. This is why it’s important to learn how to avoid meetings that are unneeded. Maybe a better word to use is manage rather than avoid. The goal here is to make sure you spend your time in meetings that drive decisions and help your team move forward.

    The most common method is simply to block out two hours every day as a standing meeting. Usually, most people will set up a recurring meeting at a time they find beneficial. They’ll use that as a time to catch up on their development work.

    Another way to avoid meetings so you can get work done is to show up before anyone else does. Personally, we like showing up early because the office is generally quieter. Most people who show up early are like you: they just want to get work done without anyone bugging them.

    This is important for individual contributors because our work requires times where we focus and we don’t talk to other people. Yes, there are times you might be problem-solving where you might want to work with other people. But once you get past the blocking issues, you just need to code. It’s about getting into that zone where you are constantly holding a lot of complex ideas in your head about the work you are doing. If you are constantly stopped, it can be hard to pick up where you left off.

    4. GitHub


    Some CS majors started using GitHub the day they were born. They understand every command and parameter and can run circles around professionals.

    Others get their first taste of GitHub at their first job. For them, GitHub is a hellish landscape of confusing commands and processes. They are never 100% sure what they are doing (there’s a reason cheat sheets are popular).

    No matter what repository system your company uses, the system is helpful if you use it correctly and a hindrance if used improperly. It doesn’t take much for a simple push or commit to turn into hours spent untangling some hodgepodge of multiple branches and forks. In addition, if you constantly forget to pull the most recent version of the repository, you will be dealing with merge conflicts, which are never fun.

    If you need to keep a GitHub command cheat sheet, then do it. Whatever makes your life simpler.

    5. Writing Simple Maintainable Code


    One tendency younger engineers might have is to attempt to implement everything they know in one solution. There is a desire to take your understanding of object-oriented programming, data structures, design patterns, and new technologies and use all of it in every bit of code you write. You create unnecessary complexity because it’s easy to be overly attached to a solution or design pattern you have used in the past.

    There is a balance between complex design concepts and simple code. Design patterns and object-oriented design are supposed to simplify code in the grand scheme of things. However, the more a process is abstracted, encapsulated, and black-boxed, the harder it can be to debug.

    6. Learn to Say No and Prioritize

    This goes for really any role, whether you are a financial analyst or a software engineer. But in particular, tech roles seem to have everyone needing something from them. If you are a data engineer, you will probably get asked to do more than just develop pipelines. Some teams will need data extracts, others will need dashboards, and others will need new pipelines for their data scientists.

    Now, prioritizing and saying no might really be two different skills, but they are closely intertwined. Prioritizing means spending your time only on work that has a high impact for the company, whereas saying no sometimes just means avoiding work that should be handled by a different team. The two often happen in tandem in every role.

    This can be a difficult skill to acquire as it is tempting to take on every request thrown your way. Especially if you are straight out of college. You want to avoid disappointing anyone, and you have always been provided a doable amount of work.

    In large companies, there is always an endless amount of work. The key is only taking on what can be done.

    There are a lot of skills that aren’t tested for in interviews or even always taught in colleges. Oftentimes, this is more a limitation of the environment rather than a lack of desire to expose students to problems that exist in real development environments.

    7. Operational Design Thinking

    One skill that is hard to test for in an interview and hard to replicate when you are taking courses in college is thinking through how an end-user might use your software incorrectly. We usually reference this as thinking through operational scenarios.

    However, this is just a polite way of saying you’re attempting to dummy-proof your code.

    For instance, since much of programming is maintenance, it often means changing code that is highly tangled with other code. Even a simple alteration requires tracing every possible reference of an object, method, and/or API. Otherwise, it can be easy to accidentally break modules you don’t realize are attached. Even if you are just changing a data type in a database.

    It also includes thinking through edge cases and thinking through an entire high-level design before going into development.

    As for more complex cases where you are developing new modules or microservices, it’s important to take your time and think through the operational scenarios of what you are building. Think about how future users might need to use your new module, how they might use it incorrectly, what parameters might be needed, and if there are different ways a future programmer might need your code.
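    As a small, made-up illustration of that mindset, the function below (the name and parameters are ours, not from the article) validates its inputs up front and fails with a clear message, instead of letting bad values surface as a confusing error deep inside the module:

```javascript
// Hypothetical API used only to illustrate the point: validate inputs up
// front and fail loudly, rather than letting bad values cause confusing
// errors later in unrelated code.
function scheduleJob(name, delayMs) {
  if (typeof name !== "string" || name.length === 0) {
    throw new TypeError("scheduleJob: name must be a non-empty string");
  }
  if (!Number.isFinite(delayMs) || delayMs < 0) {
    throw new TypeError("scheduleJob: delayMs must be a non-negative number");
  }
  return { name, runAt: Date.now() + delayMs };
}

module.exports = { scheduleJob };
```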

    Simply coding and programming is only part of the problem. It’s easy to create software that works well on your computer. But there are a lot of ways deploying code can go wrong. Once in production, it’s hard to say how code will be used and what other code will be attached to your original code. Five years from now, a future programmer might get frustrated at the limitations of your code.
  • 开源日报 Issue 462: "More Than File Names, Revised: ripgrep-all"

    June 20, 2019
    Today's recommended open source project: "More Than File Names, Revised: ripgrep-all"
    Today's recommended English article: "The Goods and Bads of Serverless"

    Today's recommended open source project: "More Than File Names, Revised: ripgrep-all". Link: GitHub
    Why we recommend it: ripgrep can search inside files for what you want using regular expressions, and ripgrep-all does the same, but it supports more file types. Zip archives, which is a big help; PDF files, which is more or less expected; MP4 files, where it actually locates the subtitles and tells you when a word is spoken, which is rather surprising; and JPG files, where it uses OCR (in short, scanning plus text recognition) to read what an image says and search that. The OCR may be clunky and slow at this stage, but combining different technologies really can produce something unexpected.
    Today's recommended English article: "The Goods and Bads of Serverless" by Benjamin Tanone
    Original link: https://medium.com/@benjamin.tanone/the-goods-and-bads-of-serverless-e7f3395a8f14
    Why we recommend it: The pros and cons of serverless.

    The Goods and Bads of Serverless

    Does it actually live up to the hype?

    For the past 6 months, I have been developing a lot of my team’s RESTful API backend using the AWS Serverless Application Model (SAM). The initial decision to use a serverless architecture was based on (1) cost savings and (2) easy scalability. Also, the fact that SAM supported Node.js meant that front-end developers could work on backend code without having to switch languages and, to an extent, programming paradigms.

    Contrary to what serverless marketers tell you, using a serverless architecture is not all that rosy. Sure, your application’s scalability is only limited by your corporate credit card’s credit limit, but there are quite a few gotchas which makes me wonder whether or not our decision to go serverless was correct.

    Let’s start with what I have found to be less than pleasant about using a serverless architecture (since marketers have already emphasised The Goods so much that you probably know them by now).

    The Bads

    Cold starts

    Oh dear, where do I start with this one?

    Every time you ask a serverless veteran what their #1 gripe with serverless is, they’ll probably say, “Cold starts.”

    Cold starts are basically this: when a “function” has been idle for quite a while, your serverless provider (e.g. Amazon) recycles the space that your function had been taking. This means that if the function was to be used again, your serverless provider would need to prepare it for execution.

    There are a lot of articles and resources on the internet explaining the problem. For example, this article stated that Lambda functions which are running Node take about 12 milliseconds on average to initially load (not to run). This looks fine at first, but when you look at another article, this figure jumps up to roughly 7 seconds when you operate within VPCs (which you often need to because DBs are usually hidden behind VPCs).

    This was one of the reasons my team and I are planning to move away from a serverless architecture. Every time we navigated the front end, it would take around 5–10 seconds for the data to load initially (afterwards it takes only around 500 ms). Note that each of our APIs is served by a separate function, so that 5–10 second wait recurs every time you try to use a different piece of functionality. Anyone with a hint of UX experience will tell you that this is not good UX at all.

    On the other hand, there are ways to get around cold-starts. For example, you can ping your functions periodically to ensure they’re always warm. If you’re feeling adventurous, you can also set up a single function to handle all API calls in order to ensure that cold-starts only happen once (instead of in our case where each API endpoint has its own functions, which have their own cold-starts).
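    The "single function for all endpoints" workaround can be sketched as a tiny router. The event shape loosely follows API Gateway proxy events, and the route names are invented:

```javascript
// Sketch of the "one function for all endpoints" workaround: a single Lambda
// dispatches on the request path, so only one function ever cold-starts.
// Route names and handlers are invented for illustration.
const routes = {
  "/users": async () => ({ statusCode: 200, body: JSON.stringify({ users: [] }) }),
  "/orders": async () => ({ statusCode: 200, body: JSON.stringify({ orders: [] }) }),
};

const handler = async (event) => {
  const route = routes[event.path];
  if (!route) return { statusCode: 404, body: "Not Found" };
  return route(event);
};

module.exports = { handler };
```

    The trade-off is that one function now holds the code (and the blast radius) of every endpoint, which is exactly the opposite of the one-function-per-endpoint layout described earlier.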

    EDIT: As of June 14th, it seems AWS has started to implement a new architecture which they’ve been talking about; cold-starts now only take 2.5 seconds on average. In the [proposed] new architecture, Lambda functions can share/reuse existing ENIs (think VPC connections), so the heavy cold-start only happens once per VPC. See: https://www.nuweba.com/AWS-Lambda-in-a-VPC-will-soon-be-faster

    Your applications have to be truly stateless

    “Hah! RESTful APIs these days have to be stateless anyways! Easy peasy.”

    But wait, the systems powering RESTful APIs aren’t always fully stateless. For example, Spring Boot applications take a few seconds to start up and reach a state where they are ready to serve a request. During that start-up period, an application may prepare a pool of connections to its database layer and retrieve its configuration from remote sources so that, when requests arrive, it doesn’t have to waste time preparing itself.

    In a serverless environment, you don’t have the luxury of making sure your application is ready to serve your request; you have to design your application in such a way that it is ready to serve a request from a cold, dead state.

    Do you want to cache that particular decryption key locally? Tough luck.

    Do you want to reuse those connections? Nah, we have to get rid of them.

    These little quirks with being fully stateless may seem small at first, but they can add up really quickly to the time it takes for your application to serve a request. Is this user authenticated and authorised to access this endpoint? Wait, let me contact our identity provider and see if they’re actually legit. Alright, we can start doing something now. Oh wait, before we can get our data, we have to negotiate and establish a connection to the database.

    Remember: seconds count when serving a request, because they can quickly add up as pages make multiple API calls (e.g. authenticate/authorise, get user data, get non-user data).

    No one really makes serverless apps…

    This means that, more often than not, instead of going to Stack Overflow, you have to find the answers to your questions in official documentation. Even then, documentation and examples for serverless applications are sparse, as they are maintained by a (relatively) small community.

    As of writing, there are fewer than 900 questions on Stack Overflow related to the Serverless Framework (SO tag: [serverless]), AWS SAM’s older, cross-provider sibling. On the other hand, there are around 57,000 questions on Stack Overflow related to Spring Boot (SO tag: [spring-boot]).

    …which means there is no mature framework for it (yet)

    Yes, there are frameworks such as AWS SAM and Serverless. However, they are not as mature as other more well-established frameworks such as Spring (which is fair, since Spring had about a decade’s head start on serverless frameworks).

    This means that we had to do a lot of manual work which would normally be taken care of in other frameworks (i.e. Spring Boot). For example, we had to map error responses ourselves, whereas in Spring Boot, unhandled errors are automatically mapped and returned as an HTTP 500 (which you can configure further if the default does not suit your needs).
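    The kind of error-mapping boilerplate described above might look like this wrapper (the helper name is ours), which turns unhandled exceptions into an HTTP 500 response:

```javascript
// A wrapper (the name is ours) that does what Spring Boot would do for free:
// catch unhandled errors and map them to an HTTP 500 response.
function withErrorMapping(fn) {
  return async (event) => {
    try {
      return await fn(event);
    } catch (err) {
      return {
        statusCode: 500,
        body: JSON.stringify({ message: "Internal Server Error" }),
      };
    }
  };
}

const handler = withErrorMapping(async () => {
  throw new Error("boom"); // simulate an unhandled failure
});

module.exports = { handler, withErrorMapping };
```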

    In addition to this, AWS SAM does not yet have a healthy and mature ecosystem of libraries around it when compared to older frameworks such as Spring. With Spring Boot, you can literally pull in dependencies such as Spring Security, configure them a bit, and start building your business logic after tinkering with your dependencies for a few hours. Well-established best-practices are available everywhere too, as a lot of people have worked on a Spring Boot application in the past.

    It’s not really good for long-running jobs

    Despite serverless being touted as the future of computing, it’s still missing one thing: a lot of serverless (to be precise, Function-as-a-Service) platforms have hard execution time limits. For example, Lambda has a 15-minute (900-second) execution time limit; if you exceed it, Lambda automatically kills your function regardless of its progress.

    Admittedly, for user-facing serverless APIs you wouldn’t want to make the user wait 15 minutes anyway. However, when you’re dealing with datasets that run into six figures (as our team does), you need to re-examine whether that time limit hinders your ability to process all the data.

    On the other hand, several articles on the internet (such as this one) have suggested packaging your long-running jobs as Docker containers and deploying them to services such as AWS Fargate. However, this approach is arguably not fully serverless, as you’d need to deal with the environment in which your application runs (in this case, Docker containers and images).

    There are also suggestions to use Lambda recursively for long-running jobs, although I personally haven’t played with that concept yet. I am a bit wary of the dangers, though, as it removes the concept of “time limits” on function execution, which may result in you having to explain to your boss why your bill jumped from $1 to $20 within the span of one day.

    Development environments are… Tricky

    If I had a dollar for every time my teammates ask me how to run our backend, I would probably be rich right now.

    While I don’t know if this applies to the Serverless Framework, developing on AWS SAM is a bit of a pain.

    One of my team’s major issues is that we have a lot of dependencies which are not automatically managed by SAM. For example, did you forget to run “npm install” locally? SAM will just crap out and say “Error: Failed to import module YourModule” or something along those lines. Or how about when you want to include pre-run initialisations, such as initialising your DB? Well, you’ve got to (1) do that manually or (2) create your own script/function to do it.

    This problem is made worse by the fact that serverless, by nature, means that you have to take advantage of external services in order to temporarily store state. However, AWS SAM only emulates a (really dumbed-down) API Gateway + Lambda environment, which may cause you problems when you want to start using other AWS services such as SQS or SNS.

    The Goods

    Pretty Darn Good Scalability

    As I and many other serverless advocates and marketers have mentioned, using a serverless architecture means your application becomes REALLY scalable. That is pretty much the fundamental reason for using a serverless architecture in the first place: managing servers manually is nasty and filled with pitfalls and gotchas (for example, what if you have a spike in traffic and don’t have enough servers to cope with it?).

    With a serverless architecture, you pretty much say, “Hey Cloud Provider, you deal with making sure I have enough resources/servers to serve my customers. I’ll just deal with the logic of my application.” This means that scalability is taken care of by your cloud provider, and you don’t need to think too much about the things that would normally make an Ops engineer bang their head while their boss complains about the company servers being down.

    That being said though, as we are still developing our project, we have yet to see how this would fare in a production environment.

    You Pay for What You Need

    Remember how you thought Virtual Private Servers were amazingly more cost-efficient than maintaining on-premise hardware, since you only pay for what you need? Serverless applications take that notion a step further.

    When you have a serverless application, you literally pay for your application’s level of usage. Did you fail to tell a single soul that you launched your serverless application? No problem — you do not have to pay for anything.

    I’m not going to elaborate too much on this; a quick Google search would show you real-world examples on how going serverless can make your bill really light (or, as we have discussed before, really expensive).

    No Need for an Ops Team

    If your organisation already has a dedicated ops team in place, feel free to skip this part.

    Setting up ops for the first time for server-based applications is always a pain; not necessarily because it’s complex, but because the process requires effort. You’d have to set up scaling policies, configure load balancers, configure VPCs, configure server provisioning, and so on and so forth.

    In a serverless environment, your developers don’t have to deal with those technicalities. You only deal with the data coming in and out. The most I’ve had to deal with my execution environment is assigning a VPC to a Lambda; there’s no need to patch and secure my “servers” and ensure they’re running smoothly, or to set up autoscaling to ensure my servers can handle the load.

    Conclusion

    Serverless is the future of computing — except the future is not now.

    Look, I love serverless. I think it’s a great way of saving cost, especially for a poor graduate like me. It works for a lot of use-cases. In fact, I plan on doing more personal projects on serverless.

    However, for mission-critical projects (e.g. collecting data to be sent to the government), I would be hesitant to recommend serverless (or AWS’s rendition of it) to my clients. This is because:
    • It’s relatively new stuff, hence it may not be the most maintainable solution. Not a lot of developers are adept in developing serverless applications; they would need to be experienced with both working on the cloud and developing using the serverless paradigm (i.e. things have to ideally be lightweight and stateless).
    • Typed languages (e.g. Java), which help ensure data integrity and code maintainability, have long cold starts on Lambda. Conversely, if you want shorter cold starts, you’d probably have to use dynamically-typed languages such as Python or JavaScript, which are less “safe” than typed languages. This is one of the reasons my current team is considering moving from serverless to a server-based solution: it’s starting to become a pain to track what data gets passed around.
    • It’s very easy to make mistakes (because you can write any code you want in serverless, as long as it spits out an output).

  • 开源日报 Issue 461: "More Than File Names: ripgrep"

    June 19, 2019
    Today's recommended open source project: "More Than File Names: ripgrep"
    Today's recommended English article: "Difference between Machine Learning, Artificial Intelligence, and Deep Learning: A Response"

    Today’s recommended open-source project: “More than Filenames: ripgrep” Portal: GitHub link
    Why we recommend it: finding a file or folder whose name mentions cats among a pile of files isn’t hard; tools like Everything or whereis can do that. But what about finding the file whose contents mention cats? This project searches inside files and tells you on which line of which file the cat appears. If you lazily named your files “new”, “new(1)” and so on, it may even locate the file you want by its contents alone. Of course, if you’ve forgotten the contents as well, then…
    Today’s recommended English article: “Difference between Machine Learning, Artificial Intelligence, and Deep Learning: A Response” Author: Hein de Haan
    Original link: https://medium.com/datadriveninvestor/difference-between-machine-learning-artificial-intelligence-and-deep-learning-a-response-7438bb33459
    Why we recommend it: to people unfamiliar with the field, these terms all sound alike, because they all sound hard.
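    What ripgrep does (minus its speed tricks) can be sketched in a few lines. This toy version is not ripgrep itself, just an illustration of the idea: walk a directory tree and report the file, line number, and line for every content match:

    ```python
    from pathlib import Path

    def grep_like(root: str, needle: str):
        """A toy version of ripgrep's core job: search file contents,
        not filenames, and report (path, line number, line) per hit."""
        hits = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(encoding="utf-8")
            except (UnicodeDecodeError, OSError):
                continue  # skip binary/unreadable files, as rg does by default
            for lineno, line in enumerate(text.splitlines(), start=1):
                if needle in line:
                    hits.append((str(path), lineno, line))
        return hits
    ```

    The real tool adds regex support, gitignore awareness, and very fast parallel search on top of this basic shape.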

    Difference between Machine Learning, Artificial Intelligence, and Deep Learning: A Response

    In October 2018, Victor Basu wrote a post called “Difference between Machine Learning, Artificial Intelligence, and Deep Learning”. Although I appreciate the effort to make these concepts clear to the public, Victor’s post makes a number of claims I simply disagree with. As I think Artificial Intelligence is an important topic, I decided to write this response.

    In this post, I’ll take a few quotes from Victor’s post that I think need a response. In the process, I’ll clear up some key concepts.

    “Artificial intelligence is subdivided into machine learning and deep learning”

    Well, maybe you could say it like this, but it’s a highly confusing sentence considering what’s actually the case. First of all, Deep Learning is a form of Machine Learning. Victor talks about the fields as if they were separate, but Machine Learning is a broad subfield within Artificial Intelligence, and Deep Learning is just a subfield within Machine Learning. To define things a little more: Machine Learning is the collection of algorithms that computers can use to perform tasks they haven’t been explicitly instructed to do. This can be done by giving examples to the computer (although there are other possibilities): for example, a computer can learn to recognize cats in images if it’s given the right algorithm and a bunch of examples of cats in images (and perhaps a bunch of images without cats, to see the difference). A popular approach here is to use a Neural Network, which is basically a mathematical function whose parameters are determined by considering a lot of data (examples). Deep Learning happens when one uses a Neural Network with multiple layers between input and output, i.e. a “deep” network.
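    The phrase “a mathematical function whose parameters are determined by considering a lot of data” can be shown in miniature. Below is a single neuron, the smallest possible (and decidedly non-deep) neural network, fitted by gradient descent to four labelled examples of the OR function. The function itself is just sigmoid(w1·x1 + w2·x2 + b); learning means adjusting w1, w2, and b:

    ```python
    import math, random

    def train_neuron(examples, epochs=2000, lr=0.5):
        """One neuron: the function sigmoid(w1*x1 + w2*x2 + b), whose
        parameters (w1, w2, b) are fitted to labelled examples by
        stochastic gradient descent on the log-loss."""
        random.seed(0)
        w1, w2, b = random.random(), random.random(), 0.0
        for _ in range(epochs):
            for (x1, x2), y in examples:
                p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
                g = p - y  # gradient of the log-loss w.r.t. the pre-activation
                w1 -= lr * g * x1
                w2 -= lr * g * x2
                b -= lr * g
        return lambda x1, x2: 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

    # Learn the OR function from four labelled examples.
    net = train_neuron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
    ```

    A Deep Learning model is this same idea with millions of parameters and many stacked layers, but the principle (fit parameters to examples) is identical.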

    Second of all, the field of Artificial Intelligence contains more than Machine Learning: it also includes, for example, Deep Blue, the famous chess computer that defeated Garry Kasparov. Deep Blue didn’t use Machine Learning; it used a predefined search procedure to find good moves to play. Machine Learning, and in particular Deep Learning, is a hot topic in Artificial Intelligence because of its amazing results in recent years, but it’s certainly not the only subfield within Artificial Intelligence. Note that Victor is not necessarily claiming it is the only subfield; his quoted sentence does, however, slightly suggest this is the case.
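    The “predefined search procedure” at the heart of Deep Blue was, in vastly more sophisticated form, minimax search. A toy sketch makes the idea concrete: recursively score a game tree assuming both players play optimally, with no learning from data anywhere:

    ```python
    def minimax(node, maximizing=True):
        """Score a game tree: leaves are position evaluations (numbers),
        internal nodes are lists of the positions reachable in one move.
        The maximizing player picks the best child, the minimizing
        player the worst; alternate at each level."""
        if isinstance(node, (int, float)):  # leaf: a position's evaluation
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)
    ```

    Every rule here is hand-written in advance, which is exactly why Deep Blue counts as Artificial Intelligence but not as Machine Learning.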

    “… Artificial Intelligence revolves around the concept that a model is built to perform all kinds of tasks irrespective of situations.”

    Well, that would be Artificial General Intelligence: an Artificial Intelligence that can successfully perform roughly all tasks humans can. This has been a major goal within Artificial Intelligence since the field started in the 1950s. However, Artificial Intelligence also encompasses Narrow Artificial Intelligence, designed for a specific task.

    “The artificial intelligent model means a model that could work, think and respond just like a human brain.”

    Well, that’s interesting. Again, this would be Artificial General Intelligence, not Artificial Intelligence in general (pun intended). An Artificial General Intelligence performs as well as or better than humans in (most of) their intellectual tasks, but this does not mean it thinks just like a human. Depending on the design of the intelligence, it could be a very alien intelligence, or it could indeed be very human-like. It depends on the techniques being used, and on how we define human-like thinking.