Open Source Daily

  • Open Source Daily Issue 412: "icecream, an enhanced print"

    May 1, 2019
    Open Source Daily recommends one high-quality open source project on GitHub and one hand-picked English technology or programming article every day. Keep reading Open Source Daily and maintain the good habit of learning something every day.
    Today's recommended open source project: "icecream, an enhanced print"
    Today's recommended English article: "Should We Automate the Planning System?"

    Today's recommended open source project: "icecream, an enhanced print" (GitHub link)
    Why we recommend it: A statement like print(a) is sometimes not explicit enough, which is why people end up writing print("a", a). This project simplifies that step: it automatically prints the variable together with its value. Called with no arguments at all, it prints the current location in the code, so you no longer need print(1) markers either. It is not limited to Python; versions exist for other languages as well, and it helps a great deal when you would rather not reach for a debugger.
    Today's recommended English article: "Should We Automate the Planning System?" by Lyndal Mackerras
    Article link: https://medium.com/table-top/should-we-automate-the-planning-system-d4a61f9f9c1
    Why we recommend it: automated planning is more transparent, but that does not necessarily make it a perfect solution.

    Should We Automate the Planning System?

    Automatic planning could provide new opportunities to help us build better cities, but introduces thorny issues of bias and accountability.

    These days it’s very popular to say that the planning system is ‘broken.’ Everyone is fed up with it, from the communities violently displaced by dodgy “regeneration” schemes, to the overstretched planners who spend all day answering phone calls about mundane application queries, to the developers who spend thousands of hours and hundreds of thousands of pounds arguing over every tiny detail of their plans. The system is incredibly complex, often contradictory, and always critically under-resourced.

    The English planning system is based on relentless negotiation and renegotiation at every stage of a project. This creates uncertainty for developers, heightening risks in the already risky business of building speculatively. Their response is usually to throw as much money and resources at the problem as possible, sometimes even paying the salary of a planning officer to ensure their application is expedited. Since local authorities typically have a fraction of the resources, skills and funding of the developers they are supposed to be regulating, developers often get what they want – at the expense of everyone else.

    This is one of the reasons people get so frustrated with planning: often it feels like no matter what the local community says or does, all of the decisions have been predetermined behind closed doors. One of the most popular proposed ‘solutions’ to these issues is to automate the planning system. By ‘updating’ the planning system with increased data-collection and surveillance of cities, proponents believe local plans will become more astute, and planning decisions will be made in a more objective and transparent way.

    This automated system is likely to manifest in two ways. The first is the introduction of algorithmically generated decisions on planning applications. Some local authorities have already begun experiments to automatically screen householder development applications for compliance with planning regulations before they are assessed by a planning officer. Others, such as Milton Keynes, are hoping to entirely automate decisions on permitted development applications by the end of the year. While this will save planners a lot of time and effort on mundane, routine tasks, it remains to be seen whether more complex, large-scale planning applications will also be assessed by algorithms in the future.
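
    To see what this kind of automated screening might amount to in code, here is a minimal sketch of rule-checking over structured application data. Everything in it, the fields, the limits and the rules, is invented for illustration and is not taken from any real authority's system.
    // Hypothetical, highly simplified householder application data.
    interface HouseholderApplication {
      extensionDepthMetres: number; // how far the extension projects
      ridgeHeightMetres: number;    // highest point of the structure
      inConservationArea: boolean;
    }

    interface ScreeningResult {
      passes: boolean;
      issues: string[]; // reasons for a planning officer to take a look
    }

    // Screen an application against illustrative, made-up rules before
    // a human planning officer assesses it.
    function screenApplication(app: HouseholderApplication): ScreeningResult {
      const issues: string[] = [];
      if (app.extensionDepthMetres > 4) {
        issues.push('Rear extension deeper than the illustrative 4 m limit');
      }
      if (app.ridgeHeightMetres > 8) {
        issues.push('Ridge height above the illustrative 8 m limit');
      }
      if (app.inConservationArea) {
        issues.push('Conservation area: route straight to manual review');
      }
      return { passes: issues.length === 0, issues };
    }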

    A second way to automate the planning system is to create complex integrated digital models of the built environment, in what is becoming known as City Information Modelling (CIM). Similarly to Building Information Modelling (BIM), where a single model seeks to encompass all of the information required for a construction project, CIM is an attempt to quantify all of the qualities of a city and represent them in a digital model. This model could then use ‘machine learning’ to decide where new development should go, distribute public resources, and assess planning applications. In order for the CIM to reflect reality and accurately predict future behaviour, data must be continuously gathered from the city and fed to the simulation. This requires comprehensive surveillance of as many aspects of the city (and its citizens) as possible.


    Putting aside the obvious privacy issues inherent in this proposal, CIM has the potential to improve the planning system in lots of different ways. Firstly, supporters argue that the only way the built environment can keep pace with the increasing rate of change in our society is by continuously assessing data and updating plans in response. This reactive plan-making might address some of the issues caused by 'plan lag', where start-ups disrupt familiar patterns in unpredictable ways at a rate faster than local plans can moderate.

    Secondly, as the algorithm will need quantifiable metrics to be able to weigh the merits of a proposal, CIM might lead to more regulation of building standards. This could help reduce some of the uncertainty and negotiability of the current system, leading to better outcomes for both communities and developers. Finally, if everyone was working from the same model — planners, developers, and communities alike — some of the current issues with transparency and back-door dealing might be solved. CIM has the potential to act as a great communication tool: communities would be able to see development proposals in their 3D context, and clash-detection software could visualise where planning applications deviate from agreed standards. Perhaps then knee-jerk NIMBY-ism might be tempered by a more informed discourse about the predicted impacts of different proposals.

    On the other hand, introducing AI decision-making programs to public institutions also introduces a whole host of new ethical issues. One particularly concerning problem is that neural networks tend to operate as “black boxes.” It’s very difficult to follow the chain of logic they have used to make decisions, which means that explaining why certain decisions have been made becomes nearly impossible. One group looking at this problem of ‘explainability’ is the AI Now Institute, who have recently released an Algorithmic Accountability Policy Toolkit to help public bodies understand the potential issues with using algorithms in decision-making. They argue that spurious correlations can cause algorithms to make odd decisions, noting that, for example, “a model that explains that it denied someone a loan because they were born on a Tuesday is not very useful.”

    In the case of automatic planning, this may mean that the CIM mysteriously decides to approve a 50-storey tower next to your living room window because each day an average of 17 dog-walkers pass by on their way to the park. Needless to say, this is not the path to democratic accountability and harmonious community consultation. There is also a growing concern that algorithmically-generated decisions tend to amplify biases inherent in datasets, a point made terrifyingly clear in Virginia Eubanks’ book Automating Inequality. For planning, this could result in aggressive social cleansing, as the algorithm seeks to rectify “underperforming” areas of the city.

    Automating the planning system is therefore a complicated task. The current government’s war on local authorities means that planning departments have already been stripped back to their bare bones, and automating the entire system may sound like a tempting option. Though there is a lot of potential to improve the efficiency and legibility of planning decisions, automation also raises difficult questions about how decision-making should be carried out in the 21st century. As a society, we need to decide how much license to give to AI decision-making programs, and how they can be held accountable for their recommendations. Perhaps, then, the key to this project is not building a perfect CIM, but designing new methods of communication and avenues of redress between people and algorithms.
  • Open Source Daily Issue 411: "Coding in the Dark: codeinthedark.github.io"

    April 30, 2019
    Open Source Daily recommends one high-quality open source project on GitHub and one hand-picked English technology or programming article every day. Keep reading Open Source Daily and maintain the good habit of learning something every day.
    Today's recommended open source project: "Coding in the Dark: codeinthedark.github.io"
    Today's recommended English article: "Microservices Overview"

    Today's recommended open source project: "Coding in the Dark: codeinthedark.github.io" (GitHub link)
    Why we recommend it: This time we are introducing a very fun event called Code in the Dark. Put simply, contestants have 15 minutes to use the provided assets and their own skill to build a web page... or rather, a screenshot of one; it only has to show everyone what the page looks like, and the audience decides the result. The biggest restriction is that you cannot preview your work, so you can only roughly picture in your head what the thing will look like, which means solid fundamentals are a must if you want to do well. If you are interested, it makes a fun recreational activity.
    Today's recommended English article: "Microservices Overview" by Josep Bernabé
    Article link: https://medium.com/@jbgisbert/microservices-overview-30e505316a8
    Why we recommend it: an introduction to microservices.

    Microservices Overview

    The Origin of Microservices

    One of the first scenarios we at Kumori thought about when designing our platform was how to survive eventual success. Creating an SLA-driven platform to automatically deploy, configure and run a bunch of small services and applications is complex enough, but what if we ended up with a humongous number of services and applications instead, some of them huge?

    Several companies have faced similar problems in the past. I personally like Jim Gray's 2006 interview with Amazon CTO and VP Werner Vogels (A Conversation with Werner Vogels). That interview probably describes the first, and one of the most famous, microservices-based systems put into production (Netflix is another great example). The system was designed in the late 20th century, before the microservice buzzword even existed. At the time, Amazon was basically a monolithic application running on a web server connected to a database. At some point, they realized that this architecture would not scale any further. Evolving the application was nearly impossible due to the complexity of the code and the high level of coupling among its pieces, mainly because of the resources they shared (such as the database). It was difficult to guess who owned which part of the system and who was going to be affected by a given change.

    As a result, Amazon came up with a new design based on a radical interpretation of Service Oriented Architecture (SOA). The single monolithic application became a net of interconnected services. Each service was responsible for a very specific set of business capabilities and data. The one and only access to a service was through a well-defined REST-like or SOAP interface. Each service was also assigned to a single team, usually small, which was in charge of the service's entire lifecycle, from its definition and design to its operation in production. In some sense, they were doing devops before the word devops became popular. The number of services was (and is) so huge that a single hit on amazon.com may call more than 100 services before all the data needed to construct the webpage is available.

    Elasticity

    Microservice architectures are usually elastic systems. A system is considered elastic if it efficiently adapts to volatile environments. Users/clients and infrastructure are part of those environments. An environment is volatile if the workload generated by users/clients changes frequently, and sometimes dramatically, and the same happens with the underlying infrastructure. The infrastructure changes if its topology varies (for example, machines are added or removed) or if its elements crash, malfunction or underperform. The individual probability of a crash, malfunction or degradation can be small, but the cumulative probability can be high for large hardware topologies.

    Managing such a system efficiently usually involves:
    • Scalability: the amount of infrastructure should be able to grow and shrink with the amount of revenue-generating workload, and do so at a cost lower than the income it generates.
    • Quality of service: the system must behave as users expect. This perception is usually a combination of usability, availability, performance and security. Usability is a very important issue, but it mainly depends on the user interface design, so from the microservices perspective I focus on the last three: availability, performance and security.
    To achieve efficiency in a volatile environment, we must design our system carefully to avoid compromising scalability, introducing bottlenecks, provoking cascade failures or opening security vulnerabilities. This becomes even harder if the system cannot be tested under production-like conditions, which can happen for a variety of reasons. Two frequent ones are the cost of simulating a production environment for a large system, and the impossibility of predicting production workload patterns due to their variability. As a result, the system cannot trust its own pieces in production, especially when they are under pressure.

    The Microservices Architectural Pattern

    For me, microservices is more a buzzword, a concept or a set of high-level architectural recommendations than an architectural pattern. As with many other buzzwords, there is no single, common definition of what a microservices-based architecture is and what it is good for. However, it is commonly considered a good approach to developing elastic software.

    As we have seen, elastic systems run software on top of volatile environments and must be prepared to work in a permanently degraded state (i.e., most of the time something is not working properly, or not working at all). Some other elements usually associated with elastic systems are:
    • A pay-per-use approach. The amount of money paid by customers depends on how many times, and how, they use the elastic software. That is because you also pay per use for your infrastructure, as explained in the next point.
    • Infrastructure as a Service (IaaS). Elastic software usually runs on infrastructure provided as a service and billed on a pay-per-use basis. That is why systems should be elastic: to book just what you need to fulfill the Service Level Agreement (SLA) with your customers while keeping costs reasonable.
    • High availability. Customers expect the software to always be available to them.
    • Continuous evolution. The software is continuously being upgraded, whether to add new features, improve existing ones or fix bugs.
    • Information is distributed and heterogeneous rather than persisted in a single central database.
    To achieve these goals, microservices commonly promote the following precepts (Microservices: a definition of this new architectural term):
    • Components are deployed as services: software is usually split into pieces or components. In a microservices-based architecture, components are deployed as autonomous services, which can only be accessed through a well-defined API (such as a REST API). Each service is executed as a separate process in a separate context. This approach enforces component encapsulation, preventing dirty accesses between components, since they do not share the same memory space or even the same computer.
    • Design following a business-capability-driven architecture: each component or microservice covers one business capability or a small set of them. Each microservice should also be small, with a well-defined set of responsibilities. A business capability represents a feature from the business perspective. For example, package shipping can be a business capability but data persistence cannot (Using domain analysis to model microservices).
    • Smart endpoints and dumb pipes: a distributed communication topology is preferred over a monolithic, centralized communication mechanism such as a central bus (Microservices Principles: Smart Endpoints and Dumb Pipes). Central buses and communication structures can easily become complex to manage and can turn into a bottleneck and/or a scalability limitation.
    • A single team is responsible for a microservice during its entire lifecycle (i.e., from design to operation). Teams should also be small. Two-pizza teams (8–9 individuals) are commonly considered the maximum size ("If you can't feed a team with two large pizzas, it's too large." — Jeff Bezos). This you build it, you run it approach forces development teams to stay in touch with their software's users and maintenance pitfalls.
    • Decentralized governance: there is no committee of wise people defining a reference architecture for all microservices and blessing each team's designs and technical decisions. Teams can choose their own tools and technologies to develop and manage their services. There are always common tools, such as ticketing or CI/CD systems, but teams should have a considerable amount of flexibility to choose their own technology stack.
    • Decentralized data: each microservice manages its own data using its own formats and database management systems. There is no central database accessed by everyone. If microservice A needs data managed by microservice B, A should ask B for that data through B's well-known API. A microservice only has direct access to its own database (if it has one). With this approach, a central database will never become a bottleneck, and an update to a database schema will only affect a single microservice.
    • Automated management: microservices are automatically deployed, configured, updated, scaled and recovered when they crash. Human intervention is of course allowed, but the system must be able to react by itself if needed. Autonomous predictive-analysis algorithms can also be included to foresee hazardous scenarios.
    • Fault tolerance: the system must be built to tolerate the crash or malfunction of some of its microservices. That usually means redundancy by replication, but not only that. Each microservice must also withstand crashes or malfunctions of its dependencies: if microservice A needs something from microservice B, A must keep running even if B crashes, malfunctions or underperforms, to avoid cascading failures (a minimal sketch of this kind of fallback follows this list). It might also be necessary to over-replicate some critical microservices to avoid chain reactions caused by the increased pressure on the surviving instances when one of the replicas fails (Release It! Second Edition).
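
    To make the fault-tolerance precept concrete, here is a minimal sketch of a timeout-plus-fallback wrapper in TypeScript. It is an illustration only: the service URL and the fallback value are invented, and real systems often reach for a full circuit-breaker library instead.
    // Call a dependency with a timeout and a fallback value, so that
    // microservice A keeps answering even when microservice B is down.
    // Requires a runtime with global fetch and AbortController (Node 18+).
    async function callWithFallback<T>(
      url: string,
      fallback: T,
      timeoutMs: number = 500
    ): Promise<T> {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), timeoutMs);
      try {
        const response = await fetch(url, { signal: controller.signal });
        if (!response.ok) return fallback; // B answered, but is malfunctioning
        return (await response.json()) as T;
      } catch {
        return fallback; // B crashed, or the call timed out
      } finally {
        clearTimeout(timer);
      }
    }

    // Hypothetical usage: build a page even if the (invented)
    // recommendations service fails; an empty list is a degraded
    // but still functional answer.
    async function getRecommendations(): Promise<string[]> {
      return callWithFallback<string[]>(
        'http://recommendations-service/api/top',
        []
      );
    }
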
    These precepts have the following advantages:
    • Divide and conquer: the microservices approach divides huge problems into small pieces called microservices. Each microservice is managed and maintained by its own team and can be developed using standard, well-known development tools.
    • Improved encapsulation: the microservices approach enforces encapsulation, which makes it easier to set up fault-tolerance and security countermeasures.
    • Fine-grained monitoring and scaling: since each microservice replica runs in its own process, each process can be monitored separately, providing a better overview of the system's behaviour, and fine-grained replication policies can be applied. Replicating the entire system is no longer necessary when a single component is overloaded.
    • Weaker dependencies on a specific technology stack: since microservices do not have to share the same technology, one service's stack can be changed without affecting the others, as long as the API remains unchanged.
    But they also have some disadvantages:
    • Complex global design and topology: each microservice can be simple, but the overall system, composed of hundreds of microservices, is complex to deploy, coordinate, manage and test.
    • Complex data-integrity management: data integrity in classic monolithic systems can be enforced by the underlying database management system. In a microservices architecture, the data is spread among the microservices. Atomic operations involving data from several microservices can result in integrity violations if not managed carefully. Dealing with distributed transactions is challenging and can jeopardize the scalability of the entire system, so such operations are strongly discouraged unless strictly necessary.
    • Network congestion and increased latency: calls between layers in a monolithic application happen inside the same process; calls between microservices happen between processes, and even between machines. This increases latency and can cause network congestion when the communication is chatty, so fewer messages with bigger payloads should be preferred over many small messages.
    • API coupling: microservices may decouple component code, but they increase coupling between APIs if the APIs are not designed carefully. Techniques like be liberal in what you accept and conservative in what you send (Enterprise Integration Using REST: Use versioning only as a last resort) are therefore strongly encouraged to avoid unnecessary headaches when microservice APIs change; a sketch of this tolerant-reader style follows this list.
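
    As an illustration of the tolerant-reader style named in the last point, here is a minimal sketch in TypeScript. The payload shape and field names are invented: the consumer extracts only the fields it needs, tolerates renamed or extra fields, and defaults missing values instead of rejecting the whole message.
    // Hypothetical summary a consumer extracts from another service's reply.
    interface OrderSummary {
      id: string;
      totalCents: number;
    }

    // Be liberal in what you accept: pick out the needed fields and
    // ignore everything else, so unrelated API changes do not break us.
    function readOrderSummary(payload: unknown): OrderSummary | null {
      if (typeof payload !== 'object' || payload === null) return null;
      const record = payload as Record<string, unknown>;
      // Tolerate a hypothetical field rename in newer API versions.
      const id = record['id'] ?? record['orderId'];
      if (typeof id !== 'string') return null; // the one field we truly need
      // Default missing data instead of failing the whole message.
      const rawTotal = record['totalCents'];
      const totalCents = typeof rawTotal === 'number' ? rawTotal : 0;
      return { id, totalCents }; // unknown extra fields are simply ignored
    }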

    Conclusions

    Microservices were born to face the complexities of managing elastic systems. These systems must efficiently and effectively serve users 24/7 in environments with complex and dynamic workload patterns, prone to degradation due to frequent updates, failures and malfunctions. This is what we call a volatile environment. Resource consumption must also be efficient, especially if the system is hosted on an IaaS.

    The underlying idea of a microservices-based architecture is to design the elastic system as a topology of microservices. Each microservice runs in its own process and can be hosted on a different machine. There can be hundreds of microservices in a single system. A microservice is small and responsible for a single business capability, or a small set of them, including the related data. A single team is responsible for a microservice's entire lifecycle. Each microservice can evolve independently and has its own technology stack, but API coupling must be taken into account.

    Fine-grained scaling, failure-management and security policies can be applied with this architecture. However, managing the entire system and its orchestration/choreography becomes more complex, and automation mechanisms become mandatory. Special attention should be paid to the design of the system's internal communication protocols, to avoid network congestion and to cope with the increased latencies.
  • Open Source Daily Issue 410: "speed up! tool"

    April 29, 2019
    Open Source Daily recommends one high-quality open source project on GitHub and one hand-picked English technology or programming article every day. Keep reading Open Source Daily and maintain the good habit of learning something every day.
    Today's recommended open source project: "speed up! tool"
    Today's recommended English article: "How to Write Good Code Under Pressure."

    Today's recommended open source project: "speed up! tool" (GitHub link)
    Why we recommend it: A collection of tools that help you work more efficiently. There is no doubt that tools that suit you let you get twice the result for half the effort. This project introduces good Markdown editors, Chrome extensions and more. If you happen to use a Mac, it also lists quite a few tools that improve the Mac experience, such as Alfred, which at the very least will make you much more efficient at looking up words.
    Today's recommended English article: "How to Write Good Code Under Pressure." by Ravi Shankar Rajan
    Article link: https://medium.com/swlh/how-to-write-good-code-under-pressure-5f795ec7f6ec
    Why we recommend it: the right way to face the pressure of soul-snatching deadlines.

    How to Write Good Code Under Pressure.

    In many ways writing code is almost like performing surgery.

    The surgeon is trying to save your life but he is operating under a deadline, a deadline which is non-negotiable. He is under intense pressure.

    How would you want the doctor to behave? Do you want him to appear calm and collected? Do you want him issuing clear and precise orders to his support staff? Do you want him following his training and adhering to his disciplines?

    Or do you want him sweating and swearing? Would you like if he breaks down under pressure and does irrational activities? Would you trust your life with such a doctor?

    A good programmer does not become good simply because he writes awesome code. He becomes good because he stays calm and decisive under pressure. As the pressure grows, he adheres to his training and disciplines, knowing that they are the best way to meet the deadlines and commitments pressing down on him.

    In short, he keeps doing the "right" thing that ought to be done, irrespective of the deadlines looming over him.

    And this is not an easy task to do, day in and day out. The challenge is to stay cool enough to handle the pressure at the moment so that you can succeed in the future.

    And here are some of the ways in which good programmers handle pressure.

    Don’t Honor Commitments Not Made by YOU

    As a programmer, you always have two choices — your commitment versus your fear.

    Your commitment is the date and deadline given by you to complete your work. And when you do that, the most important thing you have with you is your word, your trust. You have to make it happen, come what may. That is where you earn your respect.

    You experience fear when someone else makes a commitment on your behalf.

    Yes, that happens. Sometimes commitments are made for us. Sometimes we find that our business has made promises to the customers without consulting us. However, we are not honor bound to accept the commitments.

    The difference between these two kinds of commitments is important.

    Professionals will always help the business find a way to achieve their goals. But professionals do not necessarily accept commitments made for them by the business. In the end, if we can find no way to meet the promises made by the business, then the people who made the promises must accept the responsibility.

    This is not easy. Pressure gets to everybody when commitments are not met. But at least if you have behaved professionally, you can hold your head high and stand your ground.

    And if nobody is willing to understand your point of view, it is time to quit your job.

    Don’t Cut Corners.

    There is no such thing as "quick and dirty" code. Dirty code is bad code. Period. Never cut corners or accept anything second-rate.
    Your real test as a good programmer comes in a crisis. If your behavior changes during a crisis, then you are not a good programmer. For example, if you follow the discipline of Test Driven Development in non-crisis times but abandon it during a crisis, then you don't really trust that TDD is helpful.

    If you keep your code clean during normal times but make messes in a crisis, then you don’t really believe that messes slow you down.

    Never neglect the little things. Never skimp on that extra effort, those additional few minutes, that delivery of the very best you can do. It does not matter what others think; what matters most is what you think of yourself. Always remember: cutting corners will come back to haunt you, if not now, then later.

    And professional programmers never take the easy route and write messy code to move quickly. They do the best work they can and deliver the cleanest output possible, come what may.

    Communicate, Communicate and Communicate.

    Communication. It's about honesty. It's about treating people in the organization as deserving to know the facts. You don't try to give them half the story. You don't try to hide the story. You treat them as true equals, and you communicate and you communicate and communicate.

    If you have important information to share with your boss, colleagues, vendors — even if it’s not great news — don’t wait. If you put off providing them with actionable information until it’s too late to act, then your news will never be well received, whether it’s good or bad.

    In almost every conceivable scenario, it’s to your advantage to communicate as quickly as possible, allowing everyone involved to understand and digest the information, formulate an appropriate reaction, and respond accordingly. If it is bad news, your early warning just might allow for sufficient planning to minimize the damage.

    Above all, remain professional, polite, direct, and clear — all traits that will move your communication in the right direction during your time at your current place of work.

    And Lastly, Get Help When the Going Gets Tough

    Pair! When the heat is on, find an associate who is willing to pair program with you. You will get done faster, with fewer defects. Your pair partner will help you hold on to your disciplines and keep you from panicking.

    The quick development time of a pair often comes from increased focus. Pairing literally means having another person looking over your shoulder all the time. People actively participating in pairing are less likely to be distracted by interruptions (e.g. checking email, their mind wandering to unrelated tasks, or blankly staring at the screen for minutes on end).

    Even related secondary tasks can be shared out. The person at the keyboard can focus on banging out the code, while the partner worries about readability, testability, robustness, user upgrade migration, and other "big picture" issues related to the new code. Working with a pair provides positive pressure to stay on task.

    The adjustment period from solo programming to collaborative programming is like eating a hot pepper. The first time you try it, you may not like it because you are not used to it. However, the more you eat it, the more you like it.

    And most important of all it creates a culture of collaboration and gratitude. Next time when you see someone else who’s under pressure, offer to pair with them. Help them out of the hole they are in.

    As Booker T. Washington rightly said:
    “If you want to lift yourself up, lift up someone else.”

  • Open Source Daily Issue 409: "This may become a running gag: beijing_house_knowledge"

    April 28, 2019
    Open Source Daily recommends one high-quality open source project on GitHub and one hand-picked English technology or programming article every day. Keep reading Open Source Daily and maintain the good habit of learning something every day.
    Today's recommended open source project: "This may become a running gag: beijing_house_knowledge"
    Today's recommended English article: "Don't Be That Guy, Write Better Functions"

    Today's recommended open source project: "This may become a running gag: beijing_house_knowledge" (GitHub link)
    Why we recommend it: Readers who started following the daily last year may remember the earlier Hangzhou home-buying guide; this project is a home-buying guide for Beijing. That is why we say you can find anything on GitHub: you can even find home-buying guides. In any case, sorting out your housing as early as possible is no bad thing; who knows whether prices will suddenly climb later?
    Hangzhou home-buying guide: https://github.com/houshanren/hangzhou_house_knowledge
    Today's recommended English article: "Don't Be That Guy, Write Better Functions" by Luke Mwila
    Article link: https://hackernoon.com/dont-be-that-guy-write-better-functions-f5423aa01c1f
    Why we recommend it: how to write a good function.

    Don’t Be That Guy, Write Better Functions

    One morning, as I was warming my breakfast in the office kitchen, a colleague of mine walked in and we started engaging in some small talk. I'm going to call this colleague Freddie. Freddie had been at the company for a few weeks, so naturally, I asked him how things had been going. What he went on to say has stuck with me since. He started with a sigh and spoke about how he had been having trouble understanding the codebase he had inherited on the particular project he was working on.

    Freddie then spent a good deal of time telling me about how he had become irritated and weary from staring at a behemoth of a function that made no sense. I asked him if he had tried checking with one of his teammates who had been working on the software before him, to which he responded with a slight chuckle and said the following: "(Teammate's name) had no clue either. He stared at it as intently as I did and simply said that he didn't write it."

    I told myself two things after my conversation with Freddie. The first was, "Don't be that guy!". That is, don't be like the person who made Freddie suffer. The second was, "Write better functions". That's the only way not to be like that guy. I'm sure there are countless Freddies out there who have to inherit, make sense of, and refactor badly written functions on software projects. Well-crafted software takes care over the small units, such as functions, at the micro level, and not just the overall functionality at the macro level.

    I've written a lot of bad functions in my short coding career, and so I became deliberate about improving in this particular area. Below are some guidelines and approaches that I've learned (and am still learning) to apply from experienced professionals, colleagues, and other recommended sources.

    Understanding Functions

    Defining things is always a good place to start. Functions are programmed procedures. If you're looking for something more verbose than that, you'll have to go to Google. Software systems comprise functions to varying degrees. It may be that you're developing software with an object-oriented design, in which case functions live inside the classes that make up the system and act on the state of those classes. Or maybe your system has a function-oriented design, in which the system is decomposed into a set of interacting functions acting on centralised state. Regardless of the approach, functions exist because we need to decompose our solution concept, and at a very low level of this decomposition we find these small units that serve a specific purpose.

    Functions Should Be Small

    Keeping things small makes functions easier to read, understand, test and debug. I’m not going to give you a magic number. Some experts would say not more than 15 lines, others would say not more than 25. It’s probably something you’ll have to decide within your team. The important thing is to remember the reasons for the principle of keeping functions small.

    Readability: A function typically has a signature and a block of code that is executed when the function is called or invoked. Having fewer lines of code in the function's block makes it easier to read the function and get the gist of what it is supposed to do.

    Understandability: Smaller functions help reduce the likelihood of deviating from the main purpose of a function. The more linear the concept or purpose of the function is, the more comprehensible it will be.

    Testability: Short functions have fewer variations, which means they are easier to test.

    Here’s an example of a function that is meant to check the validity of a bearer token:
    // Assumes the jwt-decode package (versions up to v3 expose a
    // default export): npm install jwt-decode
    import decode from 'jwt-decode';

    // A token is valid only if it decodes successfully and its `exp`
    // claim (seconds since the epoch) lies in the future.
    const isTokenValid = (token: string | null): boolean => {
      if (!token) {
        return false;
      }
      try {
        const decodedJwt: any = decode(token);
        return decodedJwt.exp >= Date.now() / 1000;
      } catch (e) {
        // Malformed tokens that fail to decode are invalid.
        return false;
      }
    }
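
    Because the function is this small, a unit test can enumerate its few variations directly. Here is a minimal sketch assuming Jest; makeToken(expSecondsFromNow) is a hypothetical test helper that returns a signed JWT with the given expiry, not something defined in this article.
    describe('isTokenValid', () => {
      // makeToken is a hypothetical helper: it signs a JWT whose `exp`
      // claim is the given number of seconds from now.
      it('rejects a missing token', () => {
        expect(isTokenValid(null)).toBe(false);
      });
      it('rejects garbage that does not decode', () => {
        expect(isTokenValid('not-a-jwt')).toBe(false);
      });
      it('accepts an unexpired token', () => {
        expect(isTokenValid(makeToken(60))).toBe(true);
      });
      it('rejects an expired token', () => {
        expect(isTokenValid(makeToken(-60))).toBe(false);
      });
    });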
    

    Functions Should Be Clean

    It probably doesn't get more ambiguous than that. However, this isn't so much about code styles, indentation or variable name lengths. It's about understandability. Would Freddie be able to look at your function, figure out its intent, and make modifications without losing a day's worth of work?

    This boils down to how maintainable your code is, and maintainable code forms a great deal of the backbone of maintainable software. I understand that there are other, more subjective attributes used to define clean code, and that's something you and your team can decide on.

    Functions Should Be Simple

    Something my tech lead would often say to me is, "If it (the function) requires a lot of effort, stop and rethink your solution". In our field, effort shouldn't always be applauded, because more often than not, effort produces something complex.
    “In software development, effort doesn’t grow linearly with complexity — it grows exponentially. Therefore, it is easier to manage two sets of four scenarios each than one with six.” — Abraham Marín-Pérez
    If we can write functions based on a modularised solution and reduce the number of execution paths each function has, it will be a lot easier to make sense of what they should be doing. When code isn't written simply, it's a lot harder to make sense of, and these kinds of misunderstandings often lead to bugs.

    Here’s an example of a function that checks if a received argument is an array of strings:
    const checkIfArrayOfStrings = (arrayToCheck: any): Array<string> => {
      if (arrayToCheck && arrayToCheck instanceof Array && arrayToCheck.length) {
        const arrayOfNonStringValues = arrayToCheck.filter((value: any) => {
          return typeof value !== 'string';
        });
    
        if (arrayOfNonStringValues && arrayOfNonStringValues.length) {
          return [];
        }
        return arrayToCheck;
      }
      return [];
    };
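
    A few sample calls (our own illustration, not from the original article) make the contract concrete: the function returns the array unchanged only when every element is a string, and an empty array otherwise.
    checkIfArrayOfStrings(['a', 'b']); // returns ['a', 'b']: all strings
    checkIfArrayOfStrings(['a', 1]);   // returns []: contains a non-string
    checkIfArrayOfStrings('abc');      // returns []: not an array at all
    checkIfArrayOfStrings([]);         // returns []: empty arrays rejected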
    

    Functions Should Have One Job (No Side Effects)

    Robert Martin put it best in Clean Code: "Your function promises to do one thing…", and therefore it should. Having side effects only makes our code less readable because of the variations in the code block that don't serve that one specific purpose. Our functions should be based on a deterministic algorithm: given a certain input, they always return the same output.

    Take the following example: the function is meant to receive a particular date and return the week in which that date occurs, as an array of date objects.
    // Assumes the moment package for date arithmetic.
    import moment from 'moment';

    const getDaysOfWeekFromGivenDate = (
      date: Date | null
    ) => {
      if (date) {
        const startOfWeek = moment(date).startOf('isoWeek');
        const weekArray = moment.weekdays();
        // One Date per weekday, offset i days from the start of the week.
        const daysOfWeekInSelectedDate = weekArray.map((d, i) => {
          return startOfWeek
            .clone()
            .add(i, 'd')
            .toDate();
        });

        return daysOfWeekInSelectedDate;
      } else {
        return [];
      }
    };
    
    It could be argued that the function generally has a single purpose. However, you may have noticed that there's a point at which we generate the week from two arguments: an object (a Moment object in this case) and the days of the week (i.e. Sunday, Monday, Tuesday, etc.). So we can extract a new function from this one to simplify things and make our methods more linear in their purpose.

    When we split our function into two, we have the following. As a result, it is now easier for a random programmer to grasp the intent of our functions, write test cases for them, and modify them if necessary:
    const getDaysOfWeekFromGivenDate = (
      date: Date | null
    ) => {
      if (date) {
        const startOfWeek = moment(date).startOf('isoWeek');
        const weekArray = moment.weekdays();
        const daysOfWeekInSelectedDate = generateWeek(
          startOfWeek,
          weekArray
        );
        return daysOfWeekInSelectedDate;
      } else {
        return [];
      }
    };
    
    const generateWeek = (
      startOfWeek: moment.Moment,
      daysOfWeek: Array<string>
    ): Array<Date> => {
      if (startOfWeek && daysOfWeek.length) {
        return daysOfWeek.map((d, i) => {
          return startOfWeek
            .clone()
            .add(i, 'd')
            .toDate();
        });
      }
      return [];
    };
    
    That being said…

    These guidelines are not the only ones to follow, but they certainly lay a good foundation for producing high-quality code when we write our functions. Furthermore, writing good functions takes practice, deliberate refactoring, and another set of eyes (peer reviews). It might seem like extra work to produce this kind of code, but the returns are well worth it. Edsger Dijkstra, a programming godfather, said the following:
    “In programming, elegance is not a dispensable luxury but a quality that decides between success and failure.”
    Don’t be that guy, write better functions.