
开源日报 (Open Source Daily)

  • Open Source Daily Issue 444: “Seeing Is Believing: algorithm-visualizer”

    June 2, 2019
    Open Source Daily recommends one quality open-source project on GitHub and one hand-picked English tech or programming article every day. Keep reading Open Source Daily and keep up the good habit of learning something daily.
    Today's recommended open-source project: “Seeing Is Believing: algorithm-visualizer”
    Today's recommended English article: “Pair Programming: The Good, The Bad, and The Ugly”

    Today's recommended open-source project: “Seeing Is Believing: algorithm-visualizer” (GitHub link)
    Why we recommend it: For a simple algorithm, we can picture it in our head; for a slightly more complex one, we can trace it step by step on paper; for anything beyond that... This project provides visualizations for a number of algorithms. Rather than running straight through to the end, each algorithm advances one step at a time, and every change at every step is shown in the diagram, which is a real help in understanding how the algorithm works.
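    The core idea behind this kind of step-by-step visualization can be sketched in a few lines of JavaScript: instead of running the algorithm straight through, record a snapshot of the state after every change, so that a UI can replay the frames one at a time. (This sketch is ours, for illustration only; it is not algorithm-visualizer's actual code or API.)

```javascript
// Bubble sort, instrumented for visualization: instead of just sorting,
// we record a snapshot of the array after every swap. Each snapshot is
// one "frame" that a visualizer could render as a step.
function bubbleSortSteps(input) {
  const a = input.slice();
  const steps = [a.slice()]; // frame 0: the initial state
  for (let i = 0; i < a.length - 1; i++) {
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
        steps.push(a.slice()); // one new frame per swap
      }
    }
  }
  return steps;
}
```

    For `[3, 1, 2]` this yields three frames, `[3, 1, 2]`, `[1, 3, 2]`, `[1, 2, 3]`, which is exactly the kind of sequence the project animates.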

    Today's recommended English article: “Pair Programming: The Good, The Bad, and The Ugly”, by Justin Travis Waith-Mair
    Original article: https://medium.com/the-non-traditional-developer/pair-programming-the-good-the-bad-and-the-ugly-5fc3f9c3663c
    Why we recommend it: the pros and cons of pair programming. You might also take a look at our earlier articles introducing pair programming (link) (link).

    Pair Programming: The Good, The Bad, and The Ugly

    When I started at my first dev job, there was only one way to program: Put your head down and write code. Every so often, I would collaborate with a team member for short bursts, but coding typically happened on my own using my own machine.

    Then I found out about a company that boasted that it did 100% “Pair Programming.” What is pair programming? Pair programming is when two people sit at one machine and code together on the same problem. It’s like the collaborations that I was talking about earlier, but instead of being the exception, it was the norm. It was a concept that intrigued me.

    I later found out that that statement wasn't entirely accurate, but only because the company didn't just practice pair programming; it also practiced what is called mobbing. Mobbing is just like pairing, except that instead of two individuals, you have three or more people all working together on the problem. Code review, typically in the form of a pull request, existed as well, but it made up only a small minority of the work.

    Why would a company advocate working this way? What advantages does it give? As I researched it more, I found articles and studies that boasted of the benefits of pair programming, but they pretty much all came down to one major reason: it results in fewer bugs making it to production. The theory is that my worst day would at least partially be balanced out by your good day, and vice versa.

    The concept was intriguing and when I decided to start looking for new opportunities, I decided to apply to this company and was eventually offered a position there. I was on a team that paired almost exclusively and I loved working with my team there. After a year I was ready for a new opportunity and accepted a new role at a new company.

    Having now worked in the two environments, I thought it would be helpful to share my thoughts on what paired programming was like as a developer and if I would ever advocate for it again.

    The Good

    Even though pair programming sounds “slow,” I felt I got a lot done when I was pairing. When pairing, you tend to stay on task, and even if someone needs to take a quick break here and there, the other can forge on ahead, knowing that their partner won't be gone very long and will check over the work when they return. Given all that, I feel that pairing more than compensates for being “slower” at the actual coding.

    I never felt lost in the code base when I was pairing. I had a friend who, when he was hired into his first senior role, felt extremely lost. He was given a very high-level rundown of the code base and then given tasks to complete on his own. When he tried to ask for help, he was told he was taking too much of their time and that as a “senior” he should be able to figure it out. Now, this might be an extreme example of what not to do, but pairing would have helped my friend as he was learning how the codebase worked.

    Pairing helps facilitate knowledge transfer: domain knowledge, coding tricks, and so on. When two or more people work on something, knowledge about what is happening is spread across the team, and you don't have to worry about people leaving and taking important information with them.

    I got to know the people on my team very well. It forced me to be social and not isolate myself. People who know me well are sometimes surprised that I have battled with shyness all my life. It’s not debilitating, but I can easily close myself off from a group. Pairing didn’t let me do this and I felt a part of the team quickly. This isn’t something that happened exclusively because of pairing, but I feel it definitely helped.

    The Bad

    Pairing isn't all roses. There were downsides to pairing from a developer's point of view. One of those was the lack of a “home base.” On a team, you are often changing partners, and therefore you or your pair will need to trade desks to facilitate that. In this game of musical desks, you tend to pack light and minimal, which keeps you from feeling like you have a “home base” that is all your own.

    As I stated earlier, pairing tends to push you to stay on task. This means you don't often feel like you can take small detours and “play.” I never wrote code just for fun. I didn't feel like I could do some personal exploration around a subject just to see if it would work, even if I had no intention of using it in production. Experimentation is a very important part of a developer's continued education, and when you feel like you can't do it, you start to feel like a “code monkey,” coding up whatever someone tells you to do.

    Another downside is that socializing can be draining. Some people thrive on social interaction, while others avoid it at all costs. All of us fall somewhere on that spectrum, and anyone can find it exhausting to always be pairing. Our team tried to “scratch that itch” by setting aside every Monday as a solo work day. This did help, but there were times I just wanted to go off on my own and felt trapped when I didn't feel I could.

    The Ugly

    Besides the good and the bad, there is also an ugly part of pairing. The first is that dominant personalities, intentionally or not, push non-dominant personalities around. There is a saying that we should have “strong opinions, loosely held.” Some personalities will end up holding their opinions “looser” than others. Yes, this can happen in any team setting, but when all your interactions are this way, it becomes draining fast.

    To illustrate this, there is a “pairing best practice” where one person is on the keyboard writing code while the other person “navigates.” The person on the keyboard is supposed to be a “smart keyboard” who codes details, while the navigator tells the smart keyboard what to do, but not how. In practice, I often felt that the person on the keyboard wasn’t a “smart” keyboard. Instead, they ended up being an inefficient keyboard, just doing whatever the other person told them to do.

    Another ugly aspect is when you are more invested in pairing than the other person is. It is draining enough to always be “on” when you are both invested, but when your partner has obviously checked out and is doing their own thing, it wears on you even more.

    The Verdict

    Now that you know the good, the bad, and the ugly of pair programming, what would I recommend? Yes, I would recommend it, and no, I wouldn't. Pairing has its benefits, but it also has its negatives, and by choosing to do it exclusively, you are embracing those negatives. It's just like the old hammer analogy: when all you have is a hammer, everything looks like a nail. When all you do is pair programming, everything looks like a “pair programming nail.” Instead, I would advise a more “selective pairing process.”

    First, I would recommend that individuals have their own desk that they can make their home base. When pairing, someone comes to one person’s desk or the other and then everyone goes back to their home base.

    Also implied in that last recommendation is that pairing is not something you do all day. Maybe you pair all day one day, not at all the next, and then for part of the day the day after that. One should only pair if and when it makes sense. Maybe you pair in the morning, separate, and then meet up at the end of the day.

    Ultimately, pairing is a good thing and it should be encouraged. Like anything, though, when done in excess, the negatives can start to overshadow the positives. By using it as a tool only when it makes sense, you can enjoy the benefits while mitigating the negatives.
    Download the Open Source Daily app: https://opensourcedaily.org/2579/
    Join us: https://opensourcedaily.org/about/join/
    Follow us: https://opensourcedaily.org/about/love/
  • Open Source Daily Issue 443: “Flashy: coding-love”

    June 1, 2019
    Today's recommended open-source project: “Flashy: coding-love”
    Today's recommended English article: “Will translators still have a job in 5 years?”

    Today's recommended open-source project: “Flashy: coding-love” (GitHub link)
    Why we recommend it: Even if you never wrote a love letter back in your school days, you probably know roughly what one looks like: the product of a youth too shy to throw a straight pitch, resorting instead to curveballs and flashy flourishes, including but not limited to foreign languages, acrostic poems, and so on. Here's a new twist: what would the sentences in those love letters look like if implemented in code? This project collects a few such ideas, and you can try implementing them in whatever language you know best, even if they're unlikely to be of much use in practice...
    Today's recommended English article: “Will translators still have a job in 5 years?”, by Michal Kessel Shitrit
    Original article: https://medium.com/swlh/anthrolinguists-ee8698189210
    Why we recommend it: although artificial intelligence now does a fair share of what translators do, translators still have good reason to exist.

    Will translators still have a job in 5 years?

    As the role of human translators continues to shrink, it is up to us to redefine our value in the industry.


    Photo by Victoria Heath on Unsplash

    When I was a kid, back in the olden days, we had a family tradition. Every once in a while, we’d all go together to the video rental store, argue a lot, pick out a film and watch it together. These video stores were truly magical places — aisles and aisles of every cartoon I could imagine (though we didn’t have imdb back then, so I couldn’t imagine much). When we finished the movie, we rewound the tape (we were well-mannered kids) and took it back in 48 hours or less. By the time I was in high school, most of the video stores shut down. Instead, we drove over to the vending machine at the edge of town, pressed a few buttons and were rewarded with a thin plastic DVD box sliding out. These days, even vending machines are getting more and more scarce. With Netflix and Hulu on every computer and smart TV, an entire profession has just — poof — disappeared.

    Sound familiar? Being a translator, this is something I hear around me constantly. There are powerful winds of change currently blowing through the translation world, brought on by the rise of AI usage in machine translation. Prophecies of doom proclaim that translation is a dying profession, and translators a breed on the verge of extinction. While I can't say I agree with the doomsday approach, things are definitely going to change. In fact, over the past couple of years, the entire industry has been swept up in a sense of upcoming transformation: a feeling I personally find both terrifying and exhilarating.

    For many of my colleagues, these are scary times. Translators have seen a steady rise in demand for their services over the past 20 years. In 2006, the US Department of Labor predicted a 24% increase in demand for translators over the following 10 years. Thanks to globalization and UX trends, more and more companies dove into more and more markets, localizing their products as a result. If you were lucky enough to have a knack for languages and the right training, you were assured a rather safe job prospect in a not-so-safe market. But that cozy status quo started to crumble in 2015, when the first neural machine translation system went live.

    With machine translation having been available, in one form or another, for decades, no one could estimate how transformative this was going to be. But NMT and the forms of AI-based machine translation that followed brought such rapid improvement in only a few years that it quickly became clear the tables were about to be turned. AI relies on big data, and while some languages and specialties already show better results than others, all of them will get there sooner or later: sooner being the key word. As the role of human translators in good old source-to-target translation continues to shrink, it is up to us to redefine our value and find ways to maintain our relevance in an evolving industry, lest we go the way of the video store clerk.

    To find out how we, as human linguists, can make a meaningful contribution in hybrid human-machine translation, we first have to consider the characteristics of AI and machine learning, and more importantly — their weaknesses.

    AI-based tools excel in speed, accuracy, and efficiency. In the near future, machine learning will make our lives faster, cheaper, more efficient and accurate. Like the most realistic science fiction ever made, every aspect of our world will be changed and improved, whether we like it or not: from food production, all the way through medicine, supply, planning, driving and even home living. And its contribution to our lives does not end with technical tasks. Based on the rules it is given, or the ones it develops using the data it is fed, AI can produce ‘creative’ content that is remarkably human-like. Apply a little lipstick and don't let anyone get too close, and they will never know the difference. The output will be technically correct and the process generally a great deal quicker than a human is capable of. Have a look, for example, at the paragraph below, produced by Allen Institute's GPT-2 Explorer:
    • The first major change to the translation industry was the introduction of the “translator’s manual” in the early 1990s. This manual was a major step forward in the translation industry. It was a step forward in the translation industry because it allowed the translator to make a decision about the translation of a book. The translator could then make a final judgment based on that decision.
    • The second major change was the creation of a new translation service called “translation service” (TOS). This service was created by the translation service company, Translate.com. The TOS service was a service that allowed the translator to make a decision about the translation of a book.
    Sure, GPT-2 may not be the brightest match in the box yet, and it seems to be weirdly fixated on book translation. However, considering that this was written by a computer, I must admit it's quite impressive. The content is grammatically correct and the flow makes sense (well, sort of). Actually, some of my papers in high school weren't so far off. And as GPT-2 continues to learn, it may even get to college soon.

    In fact, on top of translation and writing, scientists have managed to use machine learning to get computers to output human-like results in many other fields. Last year, Christie's sold an AI-generated portrait for $435,000. Tools like AIVA and Amper Music offer users custom-made AI compositions for their projects. And just a few months ago, an AI entity named Benjamin created a film starring none other than Baywatch's David Hasselhoff. I know, right? If computers are smart enough to make us some Hassel-clones, humanity can finally sit back and admire a job well done.

    But these artistic endeavors lack one thing: a human touch. AI's Achilles' heel is the illogical things that make us us. Faith, emotions, culture, empathy: all of those little, random parts of humanity that make no sense but somehow drive us forward. Without them, the world would feel like a cold, strange, and alienating place. While people may be able to appreciate the precision and perfection of an AI-written piece, they may not be able to connect with it on a deeper emotional level.

    The human touch is unquantifiable, undefinable. And since computers can’t understand it, they can’t replicate or reproduce it. To bridge that gap, many pre-AI professions will become hybrids — with computers doing the bulk of the technical work, and humans adding their own unique fingerprint to create a complete product that is larger than the sum of its parts.

    In the case of translators, it’s time to shift our focus from the technical act of translation to the translation endeavor as a whole. The skills linguists are required to have today will soon become obsolete. We will no longer need people to read content in one language and type it in another. Just like a robotic vacuum cleaner, computers will do that for us (hopefully, not getting stuck under the couch quite as often). Rather than being mere language experts, translators will have to become anthropologists of a sort — anthro-linguists, if you’d like — students of human nature and culture. Specifically, our role in the translation process will be twofold.

    First, we will act as masters of humanity: make sure the soul of the content is properly conveyed. This is something I came across recently, in a website translation job we worked on for a major client in the hospitality industry. On top of our usual MTPE duties (because let’s face it, MT is not there yet), we verified that the tone and voice match those of the brand: inviting, friendly and professional. And it’s a good thing we did! NMT and similar methods do employ a meaning-based approach, but they are prone to overlook subtle innuendos and subtext. More often than not, the meaning-based approach produced content that was correct, but also dry and impersonal. Sure, it probably would have been understood either way. But it might not have made their customers feel welcome, or delighted, and a lot of the carefully-crafted effect of the original content would have been lost. As anthro-linguists, it will be our job to adapt translations, keeping them in line with the required tone and voice.

    Secondly, we will act as cultural experts for the target market, ensuring that the result is well understood, that it doesn't step on any toes (unless the point is that it does), and that it maintains the same cultural spirit as the source. Humor; references to pop culture, history, and books; or even simple visuals are all at risk of getting lost in translation, or of accidentally offending our market. An outstanding example of how seemingly harmless content can have a detrimental effect on a brand's image is the case of the German brewery Eichbaum. In honor of the 2018 World Cup, Eichbaum printed bottle caps with the flags of the 32 national teams participating. But one of the countries, Saudi Arabia, found the marketing act extremely offensive. The Saudi flag contains the Islamic statement of faith, and the choice to print it on an alcoholic beverage (the consumption of alcohol being “haram”, a serious prohibition in Islam) incited a heavy backlash from Muslims around the world. Eichbaum clearly did not think their marketing choice would cause such outrage, but an expert in Saudi Arabian culture could probably have tipped them off, had they taken the trouble to ask.

    The switching of roles in translation means that linguists interested in keeping their jobs in the industry will have to acquire a completely different set of skills to stay relevant. Each translation job will involve diving deep into the hidden meanings of the text, using detailed client briefs and a lot of creativity. Translators will need an in-depth, broad understanding of their native culture and its many sub-cultures; they will need to keep up to speed with new social phenomena; and they will need to employ various methods of research and data collection to gain insights into the markets they focus on.

    By using our own human abilities to complement the advantages of AI and machine learning, translators will remain valuable in today's disrupted industry. Even more, we can take an active role in the transformation, helping the translation industry become bigger, better, and stronger. And we can be part of the revolution while making sure the content by which we, as a society, are surrounded preserves its sense of humor, its unique character, and most importantly, its soul.
  • Open Source Daily Issue 442: “Grids: cssgridgenerator”

    May 31, 2019
    Today's recommended open-source project: “Grids: cssgridgenerator”
    Today's recommended English article: “5 Powerful Habits of Successful Developers”

    Today's recommended open-source project: “Grids: cssgridgenerator” (GitHub link)
    Why we recommend it: Perhaps you're just starting to lay out your pages with CSS grid, but writing out the grid settings for each element by hand is repetitive drudgery... This project can help. It lets you easily generate the CSS a grid needs: just draw your layout onto it, adjust the settings, and you get the code you need. Granted, it can't cover every use of grid; it only writes that very first layout for you, and the rest is still up to you. But there's no denying it saves a lot of time on that first step.
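    At its heart, such a generator just turns a layout configuration into boilerplate CSS text. A toy version of that idea in JavaScript (the function name and options are our own illustration, not cssgridgenerator's actual code):

```javascript
// Turn a rows/columns/gap configuration into the container CSS you
// would otherwise type by hand for a CSS grid layout.
function gridCss(selector, { rows, cols, gap }) {
  return [
    `${selector} {`,
    '  display: grid;',
    `  grid-template-rows: repeat(${rows}, 1fr);`,
    `  grid-template-columns: repeat(${cols}, 1fr);`,
    `  gap: ${gap}px;`,
    '}',
  ].join('\n');
}

// Example: a 2-row, 3-column grid with an 8px gap.
console.log(gridCss('.parent', { rows: 2, cols: 3, gap: 8 }));
```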
    Today's recommended English article: “5 Powerful Habits of Successful Developers”, by Ada Cohen
    Original article: https://medium.com/better-programming/5-powerful-habits-of-successful-developers-10fa9f5eee77
    Why we recommend it: a few good habits, including never-ending learning, so that the future you won't have to travel back in time to knock you on the head for the hours you wasted.

    5 Powerful Habits of Successful Developers

    Here are five powerful habits that can supercharge your success as a developer:

    Be Professional

    • “The right thing to do and the hard thing to do are usually the same thing. And both require professionalism.”
    Professionalism is like the sword of Damocles. On one hand, it is a symbol of respect and pride. On the other hand, it is a marker of duty and responsibility. The two are inseparable: you can't take pride in something that you aren't responsible for.

    Imagine that you designed some code and then built it, but it brought the production system down for a day. Obviously, the client isn't happy. You roll the code back, yet whatever harm was done can't be undone.

    The nonprofessional would shrug his shoulders, say “stuff happens,” and then begin writing the next module. The professional would sweat and fuss over the slippage and would make sure the same mistake doesn't happen again.

    Professionalism is about responsibility. You can’t be correct all the time, so you must own your mistakes.

    Try Not to Repeat the Same Error

    • “When an apology is followed by an excuse, it means the same mistake will happen again.”
    Obviously, we want our software to work. Indeed, a large portion of us are software engineers today because we once got something to work, and we want that feeling of rapture again. But we aren't the only ones who need the software to work. Our clients and managers need it to work as well. Without a doubt, they are paying us to make software that works the way they need it to.

    But software isn’t immaculate. Every product will have bugs.

    The key here isn't striving to write impeccable code. That is an idealistic dream that will never come true. The message here is taking responsibility for the blemishes in your product. Make new bugs. Make new errors. However, don't make the same mistakes again and again.

    As you grow in your profession, your error rate should rapidly diminish toward zero. Though it won't ever reach zero, it is your duty to get as close to it as you can.

    Don’t Leave Anything to Luck. It Never Works.

    • “The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten.”
    The rule of thumb is that if something can go wrong, it will go wrong, and no amount of luck can keep it from happening.

    That is why testing is so important. How can you figure out whether your code works? That is simple. Test it. Test it again. Test it up. Test it down. Test it seven ways to Sunday!

    Even when deadlines are tight and there is pressure on you to compromise, don't. Automate your test cases, get into pair programming mode, or even look at reusing existing test cases. Whatever you do, don't diminish the sanctity of this step.

    Your entire reputation rests on how well you have tested the code before delivering to production. Each and every bit of code you write should be tested. Enough said.

    But what about code that is “untestable”, written in a way that makes it hard to test?

    The short answer is to make the code simple to test. Furthermore, the most reliable approach is writing your tests before you write the code that passes them.
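    That test-first workflow can be sketched in a few lines of JavaScript: write the assertions before the function exists, watch them fail, then write just enough code to make them pass. (The example and the names in it are our own, invented for illustration.)

```javascript
// Step 1: write the test first. Running it before slugify exists fails,
// which is the point: the test defines what "working" means.
function testSlugify() {
  console.assert(slugify('Pair Programming') === 'pair-programming');
  console.assert(slugify('  The Good, The Bad  ') === 'the-good-the-bad');
}

// Step 2: write the simplest code that makes the test pass.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics
    .replace(/^-|-$/g, '');      // trim leading/trailing dashes
}

testSlugify();
```

    Because the test existed first, the function was written to be easy to call in isolation, which is exactly what makes code testable.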

    Remember, the purpose of your code is to solve a business problem. If it fails at that goal, no number of lines of code and no amount of code beautification are of any use.

    You as a software engineer should know whether your code works. Nothing else is more important.

    Always Write Flexible Code

    • “Innovation has always depended on openness and flexibility, so keep seeking both.”
    The true professional knows that delivering functionality at the expense of structure is a waste of time. It is the structure of your code that enables it to be flexible. If you compromise the structure, you compromise the future.

    The basic assumption behind every software project is that the software is easy to change. If that is not the case, then something is seriously wrong.

    Many projects get caught in a swamp of rigid code. As engineers come and go, they add further to that swamp and end up creating a beast that can be neither reworked nor maintained effectively.

    The key here is recognizing the parts of the code that make it rigid. Once we find those sections, we can fix them instead of adding to the chaos. This matters more than deadlines. Get the buy-in and do the right thing.

    Always follow the principle of “ruthless refactoring”: leave the code cleaner than you found it, and if that means doing something “extra” beyond what you were told to do, do it.

    Always Be a Learner

    • “Develop a passion for learning. If you do, you will never cease to grow.”
    “I need to do a S/4HANA course, but the business isn’t supporting it.”

    “I need to learn Web Dynpro frames, but I am not ready to take time from my busy calendar.”

    “I need to go to that Codeathon, but it is a busy weekend.”

    All of these are excuses not to learn. Your career is your responsibility. It isn't your employer's duty to keep you marketable. It isn't your employer's obligation to train you, to send you to conferences, or to buy you books. These things are your responsibility.

    As a rule of thumb, follow the 40-20 hour rule each week: use 40 hours for your employer, then use 20 hours for your own learning. Put in those 60 hours a week to develop a culture of ceaseless learning within yourself.

    What's more, 20 hours a week isn't difficult. If you use your time carefully, you will discover that you have plenty of extra time just for yourself.

    Always remember that the software field is constantly changing, and it's easy to turn into a dinosaur. That's why you need to stay on top of it by investing in yourself and your continuous learning.
    “Self-education is available to everyone, but it is taken seriously only by those who want a purposeful life.”

  • Open Source Daily Issue 441: “One-Tag Drawings: magicCss”

    May 30, 2019
    Today's recommended open-source project: “One-Tag Drawings: magicCss”
    Today's recommended English article: “Micro-Frontends”

    Today's recommended open-source project: “One-Tag Drawings: magicCss” (GitHub link)
    Why we recommend it: What can a single HTML tag do? Whatever it could do before, people are now combining one tag with CSS to draw many different shapes. Using CSS pseudo-elements, shadows, and the like, you can compose quite interesting figures, such as a heart made from two circles and a square, or a speech bubble made from a rounded rectangle and a triangle. There are plenty of practical tricks to pick up here.
    Today's recommended English article: “Micro-Frontends”, by Paul Sweeney
    Original article: https://medium.com/@PepsRyuu/micro-frontends-341defa8d1d4
    Why we recommend it: sometimes splitting the frontend, too, into several manageable parts saves you the trouble of facing one giant system with no idea where to start.

    Micro-Frontends

    Do you have a large-scale UI that takes way too long to rebuild? Do you have multiple teams that frequently run into code conflicts and other integration issues? Is one app responsible for too many features? Micro-frontends can probably help you here. The micro-frontend approach takes the microservices architecture concept from backend engineering and applies it to frontend development. But how can splitting the UI into multiple frontends help scale your project?

    In my previous job, from 2012 until 2016, I was the lead developer for a multinational company’s UI framework, where our team designed and implemented a micro-frontend architecture. In this article, I’ll share some of the benefits, and things I’ve learned through working with micro-frontends.

    What are Micro-Frontends?

    Micro-frontends have no defined framework or API; they are an architectural concept. The main premise behind micro-frontends is splitting your app into several smaller apps, each with its own repository, each focused on a single feature. This architecture can be applied in many different ways. It can be as liberal as possible, where each app could be implemented with a different framework, or it could be more prescriptive, providing tools and enforcing design decisions. There are benefits and downsides to both of these approaches, and they largely depend on your organisation's needs.

    An important thing to highlight with micro-frontends is that, despite the app being split into several projects, they are integrated together at the end into a single-page app. To the end user, it all appears to be one app, not many. Usually, there is a parent runtime that handles the lifecycle of each app, so that the whole gives the experience of a single-page application. So you're not losing anything in terms of user experience by implementing this architecture.

    When to use this architecture?

    In my opinion, the best approach to splitting an app is to split along unique sets of screens and features. Consider your mobile phone: it has different apps with different features. You have a phone app for dialing, a messaging app for texting, and a contacts app for storing numbers. These apps often interoperate with each other, but they have very distinct purposes, so they are implemented as separate apps.

    Another example, imagine you were developing an administrative system for managing a college. In such a system, you might have a page for managing staff profiles, student profiles, course details and timetables, distributing course materials, exam results, and so on. Each of these features might depend on each other loosely, but for the most part, they’re standalone features. It would be a perfect candidate project for implementing the micro-frontend architecture.

    Why use this approach?

    So we know what micro-frontends are, but why would you choose such a complex architecture? Here are a few of the top reasons why I believe micro-frontends are valuable for large-scale development:

    Faster Builds

    The larger a project becomes, the longer it can take to build. While bundlers such as Webpack and Parcel have gone to great lengths to improve the performance of bundling through the use of multiple threads and caching, in my opinion that's only a stop-gap measure for a more fundamental problem. Even with those performance improvements, as your app continues to grow, it will become progressively slower to build. Remember that without a good developer experience, it's difficult to provide a good user experience.

    By splitting your app into several different projects, each with its own build pipeline, each project will be very quick to build, regardless of how your system grows. The continuous integration system also benefits from this, as the projects can be built independently in parallel and finally concatenated together at the very end.

    Dynamic Deployments

    In my opinion, one of the coolest features about micro-frontends, is the ability to deploy new features without the need to recompile anything else, including the runtime. If you have an existing system, rather than shipping and re-installing an entire system, you only need to ship and install the newest feature. This can be incredibly powerful, and opens up many new distribution possibilities. For example, if you want to license specific features of your system, you can split those features off into separate installers.

    Parallelising Development

    Splitting your UI into separate projects opens up the possibility of having multiple teams work on the UI. Each team can be responsible for one feature of the system: one team could work on the phone app, for example, while another works on the contacts app. Each team can have its own Git repository and run deployments whenever it wants, with its own versioning and changelogs.

    For the most part, these teams don't need to know anything about each other. However, there still needs to be a point of integration between the apps: each team needs to define a public API and guarantee backwards compatibility for it. This API is typically implemented through the URL.

    How to Implement?

    This article won’t be providing a detailed approach on how to implement micro-frontends, as there’s several different approaches, but here’s some of the important things you have to consider when implementing this architecture.

    Routing and Loading Apps

    In a normal app, a router with code-splitting support is typically used: specific routes are defined, each with its own import statement. In a micro-frontend architecture, this isn't scalable. You don't want to be in a situation where you have to define routes for every single app/feature in the system; that would make it difficult to deploy new features. The approach I would use instead is to have the runtime handle instantiation of apps by listening to the URL:
    onRouteChange (route) {
        // Assuming routes are "/<app-name>/<internal-app-url>".
        let parts = route.split('/');
        let appName = parts[1];
        let appUrl = parts.slice(2).join('/');
    
        if (this.isRunningApp()) {
            this.suspendCurrentApp();
        }
    
        // Load the app's entry module from its predictable folder,
        // then start it with the remainder of the URL.
        import(`/${appName}/main.js`).then(appModule => {
            this.startApp(appModule, appUrl);
        });
    }
    
    
    In the above code snippet, the runtime is only concerned with the first part of the URL, which we assume maps to the name of a folder on the system containing the code for that app. Since another app may already be running, we suspend it first. We then import the code from a predictable folder structure, and once it has loaded we start the app, passing in the rest of the URL.

    Bundlers may get confused by that code snippet and throw an error or warning saying they cannot find the file. You'll need to configure your bundler to ignore that import statement.
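    With webpack, for instance, the `webpackIgnore` "magic comment" tells the bundler to leave a dynamic import alone so the browser resolves it at runtime. The sketch below separates the route parsing into a pure helper; the route shape mirrors the snippet above, while the loader function itself is an assumption about how the runtime might be structured.

```javascript
// Parse a route like "/phone/dial/123" into the app name and the
// URL fragment handed to that app.
function parseRoute(route) {
  const parts = route.split('/');
  return { appName: parts[1], appUrl: parts.slice(2).join('/') };
}

// The actual loader. The inline magic comment is webpack-specific;
// without it, webpack tries (and fails) to resolve the file at build
// time. Other bundlers have their own escape hatches.
function loadApp(route) {
  const { appName, appUrl } = parseRoute(route);
  return import(/* webpackIgnore: true */ `/${appName}/main.js`)
    .then((appModule) => ({ appModule, appUrl }));
}
```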

    App Lifecycle

    As noted in the previous section, apps can be started, suspended, resumed, and exited to free up resources. As such, you might want to implement a lifecycle similar to those found in mobile operating systems:
    class MyApp extends Application {
        constructor (args) {
            super(args);
            // One-time setup: parse launch args, allocate state.
        }
        onAppSuspended () {
            // Pause work and release listeners and timers.
        }
        onAppResumed () {
            // Re-acquire anything released on suspend.
        }
        onAppQuit () {
            // Final teardown before the app is removed.
        }
    }
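    To make the lifecycle concrete, here is a minimal sketch of a runtime driving those hooks. The `Runtime` class and its method names are assumptions for illustration; only the hook names mirror the class above.

```javascript
// Minimal runtime: at most one app is active at a time, and the
// previous app is suspended before a new one starts.
class Runtime {
  constructor() { this.current = null; }

  start(app) {
    if (this.current) this.current.onAppSuspended();
    this.current = app;
  }

  quit() {
    if (this.current) {
      this.current.onAppQuit();
      this.current = null;
    }
  }
}

// A demo app that just records which hooks fired.
class DemoApp {
  constructor() { this.events = []; }
  onAppSuspended() { this.events.push('suspended'); }
  onAppResumed() { this.events.push('resumed'); }
  onAppQuit() { this.events.push('quit'); }
}

const runtime = new Runtime();
const phone = new DemoApp();
const contacts = new DemoApp();
runtime.start(phone);    // phone becomes the active app
runtime.start(contacts); // phone is suspended first
runtime.quit();          // contacts is torn down
```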
    

    Communication between Apps

    Developers often ask me how to pass data between multiple apps. In a normal standalone app, we typically pass data as a series of props into components. That doesn't work here, because each app/feature no longer has direct access to the others' implementations.

    For simple data, the URL can work perfectly well here. The team responsible for an app/feature implements a public URL API for which it guarantees backwards compatibility. It's similar to how apps communicate on operating systems such as Android, where apps can register handlers for custom URLs through intents: the operating system intercepts these URLs and loads the corresponding app, passing in the data. That same principle applies to micro-frontends. The runtime intercepts the URL, loads the app, and passes the rest of the URL data into the app.
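    As a tiny illustration of the URL-as-API idea, a team could publish a URL builder for its routes. The `contacts` route shape here is hypothetical; the point is that other apps only construct URLs and never import the contacts code directly.

```javascript
// Published by the (hypothetical) contacts team: the one route shape
// they guarantee to keep backwards compatible.
function contactDetailsUrl(contactId) {
  return `/contacts/view/${encodeURIComponent(contactId)}`;
}

// Any other app triggers navigation rather than calling into the
// contacts app (e.g. via history.pushState in a real runtime):
const url = contactDetailsUrl('42');
```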

    However, there can be times when you need to pass data that's too complex to go into a URL. There are still plenty of alternative mechanisms for apps to communicate. One approach is to temporarily store blobs of data on a server and use a temporary ID in the URL, which the receiving app can use to query that blob of data. This is more complicated and requires careful management of the data on the server, but it offers benefits beyond simply passing data, such as persistence in the event of an accidental refresh or browser crash.
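    The blob-handoff pattern can be sketched as a pair of functions. Everything here is an assumption for illustration: the `/handoff` endpoint, its response shape, and the receiving route do not come from the article.

```javascript
// Sending side: store the payload on the server, then put only the
// returned ID into the URL handed to the next app.
async function handOff(payload) {
  const res = await fetch('/handoff', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  const { id } = await res.json();
  return `/invoices/import/${id}`; // route for the receiving app
}

// Receiving side: the app reads the ID from its URL fragment and
// fetches the original payload back.
async function pickUp(id) {
  const res = await fetch(`/handoff/${id}`);
  return res.json();
}
```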

    Sharing Libraries

    One common issue that worries developers about the micro-frontend architecture is wasteful use of resources, with each frontend importing its own copy of a framework. With bundler defaults out of the box, yes, this will be a problem, but it doesn't have to be.

    My personal recommendation is to use shared libraries. These libraries would be pre-installed on the system and importable by all applications. Here's an example of a folder structure you could use when deploying libraries onto a system:
    libraries/
        preact/
            8/
            10/
        components/
            1/
            2/
    
    What are the numbers? Those are the major versions of the libraries. If a library follows semver, then by definition releases that share a major version number are backwards compatible. When an app wants to use a library, it specifies only the major version it needs, rather than a specific minor and patch version.

    Benefits to this approach:
    • Caching — It’s more likely that your browser cache will be better utilised, as apps will be referencing the same library files, rather than multiple specific versions which would all have to be separately cached.
    • Updating — With this approach, if a library gets a bug fix or a slight UX change, all you have to do is deploy the new version of the library, and all of the apps automatically pick up the latest version, without recompilation, because they only specify the major version.
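    The versioned-path convention above can be captured in a small helper. The `index.js` entry file name is an assumption; the folder layout mirrors the example earlier in this section.

```javascript
// Build the URL of a shared library pinned only to a major version.
// Minor and patch updates land in-place under the same path.
function libraryUrl(name, major) {
  return `/libraries/${name}/${major}/index.js`;
}

// An app that depends on preact 10 (any minor/patch) would load it
// at runtime with a dynamic import:
// const preact = await import(libraryUrl('preact', 10));
```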

    What about Iframes?

    Don’t use them. Although the sandboxing might seem like a great idea, iframes can be a nightmare to deal with when it comes to navigation, and messaging with the parent frame.

    Instead, run all of the app logic inside the parent runtime. This means you have to be careful about global variables and CSS, but with strong linting rules and useful helper functions, this shouldn't cause much concern.

    In most apps, global variables are normally not much of a problem. They may not be the cleanest approach in a lot of cases, but they rarely have serious side effects.

    In the micro-frontend architecture, however, globals have to be carefully controlled. Globals don't only mean variables or state; they also include things such as window/document event handlers, requestAnimationFrame loops, and persistent network connections: anything that can keep actively running after the app has left the DOM. It is incredibly easy to forget that these things can leak, and they require proper teardown.

    Conclusion

    Micro-frontends are not for every project. I believe for the vast majority of projects out there, code-splitting is likely more than enough. The micro-frontend architecture is more suitable for large enterprise-level applications with swathes of functionality. When starting a project, consider how large the project might end up becoming. If you feel like the project is going to have inevitable scalability issues, consider using this architecture to solve that problem.

    Thanks for reading!
    下载开源日报APP:https://opensourcedaily.org/2579/
    加入我们:https://opensourcedaily.org/about/join/
    关注我们:https://opensourcedaily.org/about/love/