
开源日报

  • 开源日报 Issue 436: 《Learn by Doing: simple-computer》

    May 25, 2019
    开源日报 recommends one quality GitHub open source project and one selected English tech or programming article every day. Keep reading 《开源日报》 and maintain the good habit of learning something daily.
    Today’s recommended open source project: 《Learn by Doing: simple-computer》
    Today’s recommended English article: 《Getting to Know Natural Language Understanding》

    Today’s recommended open source project: 《Learn by Doing: simple-computer》. Repository: GitHub link
    Why we recommend it: But How Do It Know? is a book that explains how computers work, and this project is the author’s simulation of the computer described in the book. Building it yourself, from the simple parts up to the complex ones, is a fair amount of trouble, but it yields the kind of knowledge you can only gain through practice; and once you have new knowledge, testing it in practice is the best way to cement it. What you learn on paper is always shallow; to truly know a thing, you must do it yourself. (A flavor of this is sketched below.)
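    The central idea of the book, and of the project, is that an entire computer can be assembled from a single primitive logic gate. As a taste of what “doing it yourself” means here, this is a minimal, illustrative Python sketch (independent of the project’s own code) that builds the basic gates and a one-bit half adder out of nothing but NAND:

```python
# Illustrative sketch only: building every gate from NAND, as the book does.
def nand(a: int, b: int) -> int:
    """The single primitive: 1 unless both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate is a combination of NANDs.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor_(a: int, b: int) -> int:
    # (a OR b) AND NOT(a AND b)
    return and_(or_(a, b), nand(a, b))

def half_adder(a: int, b: int):
    """Add two bits, returning (sum, carry)."""
    return xor_(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

    From here the book stacks the same trick: adders become an ALU, latches become registers and RAM, and a clock plus a control unit ties it all together.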
    Today’s recommended English article: 《Getting to Know Natural Language Understanding》 Author: ODSC – Open Data Science
    Original link: https://medium.com/@ODSC/getting-to-know-natural-language-understanding-f18a0dc5c97d
    Why we recommend it: a brief introduction to natural language processing.

    Getting to Know Natural Language Understanding

    We like to imagine talking to computers the way Picard spoke to Data in Star Trek: The Next Generation, but in reality, natural language processing is more than just teaching a computer to understand words. The subtext of how and why we use the words we do is notoriously difficult for computers to comprehend. Instead of Data, we get frustrations with our assistants and endless SNL jokes.

    Related article: An Introduction to Natural Language Processing (NLP): https://opendatascience.com/an-introduction-to-natural-language-processing-nlp/

    The Challenges of AI Language Processing

    Natural Language Understanding (NLU) is a subfield of NLP concerned with teaching computers to comprehend the deeper contextual meanings of human communication. It’s considered an AI-hard problem for a few notable reasons. Let’s take a look at why computers can win chess matches against world champions and calculate billions of bits of data in seconds but can’t seem to grasp sarcasm.

    Humans Make Mistakes

    The first obstacle is teaching a computer to understand despite typos and misspellings. Humans aren’t always accurate in what they write, but a simple typo that you could skip right over without missing a beat could be enough to trip up the filters for computer understanding.
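    One common first line of defense is fuzzy matching: instead of requiring exact spellings, map unknown tokens to the closest word in a known vocabulary. A minimal sketch with Python’s standard difflib (real systems use full lexicons and error models trained on data; the vocabulary below is invented for illustration):

```python
import difflib

# A tiny, invented vocabulary; a real system would use a full lexicon.
vocabulary = ["restaurant", "doctor", "lunch", "waiting", "weather"]

def normalize(token: str) -> str:
    """Map a possibly misspelled token to its closest known word."""
    matches = difflib.get_close_matches(token.lower(), vocabulary, n=1, cutoff=0.8)
    return matches[0] if matches else token

print(normalize("restaraunt"))  # -> restaurant
print(normalize("docter"))      # -> doctor
print(normalize("zebra"))       # no close match -> zebra (left unchanged)
```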

    Human Speech Requires Context

    We mentioned sarcasm above, but understanding the true meaning of an utterance requires a strong grasp of context. Sarcastic replies skew the outcome, and not every negative utterance contains an explicitly negative word. Ask “How was lunch?” and receive the reply “I spent the entire time waiting at the doctor’s office”: the meaning is clear to you (lunch was bad) but not necessarily to a computer trained to search for negative words (“no” or “not,” for example).
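    A toy sketch makes the failure mode concrete: a filter that only checks for explicitly negative words classifies that reply as neutral, even though any human reads it as “lunch was bad”. (The word list and tokenization are deliberately naive.)

```python
# Deliberately naive sentiment filter: looks only for explicitly negative words.
NEGATIVE_WORDS = {"no", "not", "bad", "terrible", "awful", "hate"}

def naive_sentiment(utterance: str) -> str:
    tokens = {w.strip(".,!?").lower() for w in utterance.split()}
    return "negative" if tokens & NEGATIVE_WORDS else "not negative"

# A human reads this reply to "How was lunch?" as clearly negative,
# yet it contains no explicitly negative word, so the filter misses it.
print(naive_sentiment("I spent the entire time waiting at the doctor's office"))
# -> not negative  (wrong: lunch was bad)
```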

    Human Language is Irregular

    Language understanding also requires handling variation within the same language. British English and American English are broadly similar, but a few differences, including spelling and word meaning, can trip up a computer. And those are just two of the many, many varieties of English, which is itself a non-standardized language and yet remains the most parsed language in all of NLP. What about the others?

    Related article: The Promise of Retrofitting: Building Better Models for Natural Language Processing: https://opendatascience.com/models-for-natural-language-processing/

    What Is Natural Language Understanding?

    Natural Language Processing is the system we use to handle machine/human interactions, but NLU is narrower than that. When in doubt, use NLU to refer to the specific act of machines understanding what we say.

    NLU is post-processing. Once your algorithms have scrubbed the text, adding part-of-speech tagging, for example, you begin to work with the real context of what’s going on. This post-processing is what starts to reveal to the computer the true meaning of the text, not just a surface understanding.
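    For instance, part-of-speech tagging assigns each token its grammatical role, giving later stages something more structured than raw words to reason over. A minimal sketch using NLTK (chosen here only as a common teaching library; any NLP toolkit offers an equivalent):

```python
import nltk

# One-time model downloads for the tokenizer and the POS tagger.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("I spent the entire time waiting at the doctor's office")
print(nltk.pos_tag(tokens))
# e.g. [('I', 'PRP'), ('spent', 'VBD'), ('the', 'DT'), ('entire', 'JJ'), ...]
```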

    NLU is a huge problem and an ongoing research area because the ability of computers to recognize and process human language at human-like accuracy opens enormous possibilities. Computers could finally stand in for low-paid customer service agents, capable of understanding human speech and its intent.

    In language teaching, students often complain that they can understand their teacher’s language, but that understanding doesn’t transfer when they walk outside the classroom. Computers are similar to these language students. When researchers formulate test texts, for example, they may unconsciously formulate them in ways that avoid those three common problems above, a luxury not afforded in a real-world context. A Twitter user isn’t going to scrub tweets of misspellings and ambiguous language before publishing, but that’s precisely what the computer must understand.

    The subfield relies heavily on both training lexicons and semantic theory. We can quantify semantics to an extent as long as we have large amounts of training data to provide context. As computers consume this training data, deep learning begins to make sense of intent.

    The biggest draw for NLU is a computer’s ability to interact with humans unsupervised. The algorithms classify speech into a structured ontology, but AI takes over to organize the intent behind the words. This method of deep learning allows computers to learn context and create rules based on more substantial amounts of input through training.
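    A minimal sketch of that pipeline: classify utterances into a small, fixed ontology of intents from labeled examples. A linear bag-of-words model stands in here for the deep models the article describes, and the intents and utterances are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: utterances labeled with intents from a fixed ontology.
utterances = [
    "set an alarm for 7 am", "wake me up at six",
    "what's the weather tomorrow", "will it rain today",
    "play some jazz", "put on my workout playlist",
]
intents = ["alarm", "alarm", "weather", "weather", "music", "music"]

# Bag-of-words features plus a linear classifier (a stand-in for deep models).
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["is it going to rain this weekend"]))  # -> ['weather']
```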

    What Are The Implications?

    Aside from everyone having their very own Data? Cracking Natural Language Understanding is the key piece of computers learning to understand human language without extraordinary intervention from humans themselves.

    NLU can be used to provide predictive insights for businesses by analyzing unstructured data feeds such as news reports. This is especially valuable in areas such as high-frequency trading, where trades are handled by automated systems.

    Unlocking NLU also rockets our AI assistants like Siri and Alexa into what finally counts as real human interaction. Siri still makes numerous errors exploited for humor by shows like SNL, and those errors plague developers in search of human-like accuracy. If developers want off the SNL joke circuit, cracking natural language understanding is the key.

    Humans are still reigning champions for understanding language despite roadblocks (mispronunciations, misspellings, colloquialisms, implicit meaning), but the NLU problem could unlock the final door we need for machines to step up to our level.
  • 开源日报 Issue 435: 《Aesthetics: WebGL-Fluid-Simulation》

    May 24, 2019
    开源日报 recommends one quality GitHub open source project and one selected English tech or programming article every day. Keep reading 《开源日报》 and maintain the good habit of learning something daily.
    Today’s recommended open source project: 《Aesthetics: WebGL-Fluid-Simulation》
    Today’s recommended English article: 《Staying Focused and Understanding Trade-offs》

    Today’s recommended open source project: 《Aesthetics: WebGL-Fluid-Simulation》. Repository: GitHub link
    Why we recommend it: A small project that plays like a simple game: drag the mouse to paint color across the screen. Simple, but unexpectedly captivating. The control panel lets you adjust effects such as the background color, and you can take a screenshot as an aesthetic keepsake, whether of an intricate pattern you composed deliberately or of a spectacular scene you stumbled upon while hitting the randomizer.
    Today’s recommended English article: 《Staying Focused and Understanding Trade-offs》 Author: Sean Watters
    Original link: https://medium.com/better-programming/staying-focused-and-understanding-trade-offs-4ec9d9be315d
    Why we recommend it: separate the high-priority tasks from the low-priority ones, and focus your energy on what truly matters.

    Staying Focused and Understanding Trade-offs


    While this post is focused on software development, I have also found these ideas useful in other contexts. In fact, I have developed these perspectives primarily outside of a software context.

    Every software project comes with a unique set of challenges. To ensure success, it’s important that your project has a clearly defined purpose.

    Narrow Your Focus — Separate The Signal from the Noise

    In most contexts people talk about goals in general terms: “I want to build an app that changes the world”. However, when it comes to execution, there are some common deficiencies which trip people up:
    • Difficulty determining the steps necessary to execute efficiently
    • Inability to accept sub-optimal, but adequate and necessary, solutions
    • Struggling to change course and discard plans that are no longer valuable or necessary, even if that means discarding most or all of them.
    Smart engineers get stuck on seemingly important steps all the time. It can be incredibly distracting when you care deeply about a specific execution or implementation, but the fact is that most cases in software are not life and death. Often, deliverables that are no longer necessary or relevant can dominate and derail the project or the overall goal of the company. When engineers over-invest in details that are no longer necessary, or that are inessential to the objective, valuable time and resources are wasted, jeopardizing the project’s success. Recognizing when ‘important’ steps have become ‘distracting’ steps is an important superpower.

    Note: it is important to understand that ‘distracting’ tasks are not neutral — they are always in the way of fixing real problems.

    Prioritization

    Prioritizing tasks and understanding the difference between signal and noise go hand in hand: effective prioritization depends strongly on a clearly defined objective.

    Difficulties with prioritization often stem from a lack of understanding of how a feature or task relates to the overall project objective. A good example of poor prioritization: feeling the need to build your own framework so that it meets the exact needs of the application in the most efficient way possible, before you even have an MVP or have validated the market for the thing you are building. Unless your product is web frameworks, it would be misguided to spend that energy so early. There is no need for an infinitely scalable framework if you only have 4 users.

    Task prioritization should be ordered by the value generated divided by the effort needed to deliver the task, with varying weight placed on short-term vs. long-term value. A sketch of this scoring rule follows.
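    A minimal sketch of that rule of thumb as an explicit score (the weights and task numbers are invented; in practice, the inputs are judgment calls):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    short_term_value: float  # value delivered soon
    long_term_value: float   # value delivered over time
    effort: float            # cost to deliver

def priority(task: Task, w_short: float = 0.6, w_long: float = 0.4) -> float:
    """Weighted value generated per unit of effort."""
    value = w_short * task.short_term_value + w_long * task.long_term_value
    return value / task.effort

backlog = [
    Task("fix broken signup flow", short_term_value=9, long_term_value=6, effort=2),
    Task("hand-rolled web framework", short_term_value=1, long_term_value=3, effort=9),
    Task("polish working backend code", short_term_value=2, long_term_value=5, effort=6),
]
for task in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(task):5.2f}  {task.name}")
```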

    For a brand new startup, prioritizing longevity is important and writing bad code is never a good idea, but spending weeks polishing a backend code base that already works, while your UX suffers from poor front-end design, is poor prioritization. At the other end of the scale, a large company with high traffic volume that chooses minor UI changes with no tangible value over addressing scalability is also prioritizing poorly.

    The majority of priority decisions boil down to trade-offs. Remaining intentional about keeping your overarching goals in mind and understanding the implications of the trade-offs you make is fundamental to stable growth.
  • 开源日报 Issue 434: 《Spring-Cleaning Your Embarrassing Past: DeleteFB》

    May 23, 2019
    开源日报 recommends one quality GitHub open source project and one selected English tech or programming article every day. Keep reading 《开源日报》 and maintain the good habit of learning something daily.
    Today’s recommended open source project: 《Spring-Cleaning Your Embarrassing Past: DeleteFB》
    Today’s recommended English article: 《On Specialism vs. Generalism》

    Today’s recommended open source project: 《Spring-Cleaning Your Embarrassing Past: DeleteFB》. Repository: GitHub link
    Why we recommend it: Today you finally start your new job, and you intend to make a reliable, good impression on your coworkers. Then you remember that Facebook account you have used for years, full of things that make even you want to dig a hole and bury yourself, never mind what your new colleagues would think of them. This project scrubs everything you have ever written from your account without deleting the account itself. Facebook has probably backed it all up anyway, but at the very least you no longer have to worry about anyone seeing that embarrassing history.
    Today’s recommended English article: 《On Specialism vs. Generalism》 Author: Bryan Irace
    Original link: https://medium.com/better-programming/on-specialism-vs-generalism-and-not-being-an-ios-engineer-anymore-ed5fa65ed76e
    Why we recommend it: specialists and generalists are equally important; one has deep knowledge of a particular area, while the other has a broader perspective.

    On Specialism vs. Generalism

    Dealing with “not being an iOS engineer anymore”

    “You’re basically not going to be an iOS engineer anymore?”
    When my good friend Soroush asked this upon hearing that I had taken a new job at Stripe, I doubt he thought very much about how it’d be received. However, it really threw me for a loop. I didn’t exactly consider myself to be undergoing a career change, but was I? It’s true that I’m not going to be developing for iOS in my new role, but I hadn’t always worked in this capacity at previous jobs either. Did spending the better part of five years focused on iOS make me an “iOS engineer”? If so, when exactly did I become one and when did I subsequently cease to be? Is the designation based on one’s actual day-to-day work, or on the kind of work one primarily seeks out and anticipates?

    When you work as a software engineer long enough, it’s highly likely that you’ll end up having to decide whether or not to specialize in a particular sub-discipline or work in a broader role with general knowledge spanning several disciplines. There’s no right or wrong decision here, and it’s really not a strict dichotomy anyway.

    While programmers can undeniably be either specialists or generalists, there’s a whole lot of grey in the middle. As opposed to inherently being a specialist, it’s also very common to specialize over a period of time. Perhaps this is a subtle difference, but I think it’s one worth teasing apart; one can act in a specialist capacity when the situation dictates — and I presume that effectively every “generalist” does, from time to time — without self-identifying as such for the long haul.

    There isn’t a right answer because one isn’t better than the other, but also because many teams should contain both specialists and generalists in order to perform their best work. The best products are often brought to fruition through a combination of generalist thinking and specialist expertise.

    Specialists with the domain knowledge necessary to build best-in-breed software that takes full advantage of the platform being built for are tremendous assets to any team. Given how advanced platforms have become, it’d be near impossible to have a firm grasp on all the details without having first dedicated yourself to fundamentally understanding a particular platform’s intricacies.

    At the same time, specialists run the risk of “only having a hammer,” and as such, having every possible project “look like a nail.” With only one tool in your belt — a deep but relatively narrow area of expertise — it’s easy to inadvertently build an app that really should’ve been a website or vice versa. Or to have a great idea that you can’t quite realize, despite your excitement, due to it requiring both frontend and backend work. Said idea might be exactly the provocation that can prompt one who has historically specialized to start branching out a bit. But after having done so, are they still “a frontend developer” or “a backend developer”? Clearly, such labels start to lose their significance as we tear down the boundaries defining what we’re able to do, and perhaps more importantly, what we’re interested in doing.

    In the Twitter, Slack, and GitHub circles that modern software developers often travel in, it’s easy for a discrepancy to form between how one is best known vs. how they actually view themselves. Tumblr was quite popular during the time that I led iOS development there, which gave me the opportunity to write and speak about the work that we were doing, and even release some of it as open source. These slide decks and blog posts neglected to mention that I was actually hired to be a web developer and only moved over to iOS as needs arose, subsequently parking myself there for the next few years. I built Rails backends and React frontends at my next job, but at an early-stage company with a much smaller platform, where we primarily worked heads-down without much outward-facing evangelism for our technology.

    I’m not unique in this regard. One of the best mobile developers from my time at Tumblr has since switched over to the web. Another, a specialist in animations, gestures, and UI performance, is now a designer. Acting as a specialist at a high-profile company can cement your status as such well after you’ve stopped working in that capacity, so it’s crucial not to let outside perception prevent you from shaping your career however you see fit.

    In August 2014, I gave a talk entitled Don’t be “an Objective-C” or “a Swift Developer” to a room full of new programmers who were learning how to build iOS applications at the Flatiron School. The Swift programming language had been unveiled only two months prior, and reactions amongst iOS developers were divisive, to say the least. Many felt as though it was finally time for a modern language to replace Objective-C, and that such a change was long overdue, while others didn’t believe that Objective-C needed fixing, and would’ve preferred if Apple’s resources and the focus of its community were directed elsewhere. My goal was to try and convince these new engineers that they shouldn’t aspire to land in one camp or the other, but rather to learn the underlying, transferrable programming concepts, and to expose themselves to many different ways of concretely building software. Without understanding what’s out there, how can one make an informed decision as to how they should spend their time? Even if you decide to put down roots in a single community, how can you avoid perceiving the way that that community has historically operated as being the way that it should be going forward?

    I feel like I could give this same talk today to a set of engineers and simply replace “Objective-C and Swift” with “frontend and backend” or “mobile and web.” The idea is the same — technologies move fast and careers are long, and while you may enjoy being a specialist or a generalist for some time, you never really know when your situation could change and when circumstances may warrant otherwise. Or, when you might simply feel like trying something new.

    When I write Ruby, it’s painfully obvious to me that I don’t know Ruby to nearly the same extent that I know Swift. On some days, this makes me sad, but it just as often makes me feel empowered. Perhaps I’ll decide to spend the time needed to achieve Ruby mastery, or maybe I’ll end up retreating back to Swift at some point in the future. More realistically, I’ll get slightly better at the former and slightly worse at the latter and come to peace with that, just in time to shift towards learning something different altogether. In any case, how others describe what I do, and more importantly, how I view it myself, remains a fluid work in progress.

    I don’t expect this to change, and this I am at peace with.
  • 开源日报 Issue 433: 《Write Markdown in a Web Page: remark》

    May 22, 2019
    开源日报 recommends one quality GitHub open source project and one selected English tech or programming article every day. Keep reading 《开源日报》 and maintain the good habit of learning something daily.
    Today’s recommended open source project: 《Write Markdown in a Web Page: remark》
    Today’s recommended English article: 《Humans, Machines, and the Future of Education》

    Today’s recommended open source project: 《Write Markdown in a Web Page: remark》. Repository: GitHub link
    Why we recommend it: Perhaps you find it tedious to write Markdown slides as separate files and just want a simple Markdown slideshow. This project lets you write Markdown directly inside a textarea tag in an HTML file, add the styles you need on top, then call a function to convert it into a slideshow; open the file in a browser and you are ready to present. (A sketch of this setup follows.)
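    Since remark runs entirely in the browser, a sketch of its use is HTML/JavaScript rather than the Python used elsewhere on this page. It follows the boilerplate in the project’s README: the Markdown lives in a textarea with id "source" (the element remark.create() looks for by default), and "---" separates slides:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <style>
      /* Slide styles go here. */
      .remark-slide-content { font-family: sans-serif; }
    </style>
  </head>
  <body>
    <!-- Slides are written as Markdown inside a textarea; "---" separates slides. -->
    <textarea id="source">
# First slide

Hello, remark!

---

# Second slide

- Open this file in a browser to present.
    </textarea>
    <script src="https://remarkjs.com/downloads/remark-latest.min.js"></script>
    <script>
      // Convert the textarea's Markdown into a slideshow.
      var slideshow = remark.create();
    </script>
  </body>
</html>
```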
    Today’s recommended English article: 《Humans, Machines, and the Future of Education》 Author: Jonathan Follett
    Original link: https://towardsdatascience.com/humans-machines-and-the-future-of-education-b059430de7c
    Why we recommend it: bringing AI into education may make it more effective, whether through more personalized learning paths or a better learning experience.

    Humans, Machines, and the Future of Education


    Figure 01: Education became formalized to accelerate our understanding of difficult, complicated, or abstract topics. [Illustration: “The School Teacher”, Albrecht Dürer, 1510 woodcut, National Gallery of Art, Open Access]

    How do humans learn? Our first learning is experiential: responding to the interactions between ourselves and the womb around us. Experiential learning even extends beyond the womb, to our first teachers — our mother and other people who imprint upon us in ways intentional or not — singing a song and moving themselves. In fact, much of the learning we’ve done over human history is this stimulus-response model of simply living, and, in the process, continuing to learn. This inherent model of experiential learning is all well and good for any number of things — such as how to act toward other people, how to hoe a row, or where to find fresh water. It is less effective, however, in helping us master more nuanced things such as grammar, philosophy, and chemistry. And it is entirely inadequate to teach us highly complex intellectual fields, such as theoretical physics. That is one reason why education became formalized — to accelerate, or even merely make possible, our understanding of difficult, complicated, or abstract topics.

    In the west, formal education reached its modern pinnacle in the university, whose origins go back to the 11th century in Bologna, Italy. Rooted in Christian education and an extension of monastic education — which itself was adopted from Buddhist tradition and the far east — higher education was, historically, an oasis of knowledge for the privileged few. Integrated into cities such as Paris, Oxford, and Prague, while the university was its own place and space, it lived within and as part of a larger city and community. The focus was on the Arts, a term used very differently than we think about it today and including all or a subset of seven Arts: arithmetic, geometry, astronomy, music theory, grammar, logic, and rhetoric.

    The university experience of today looks very different. The number of subjects has exploded, with “Arts and Sciences” representing just one “college” within a university — and one that, for some decades, has been diminished in importance compared to more practical and applied knowledge such as business and engineering. A higher proportion of citizens than ever now attend an institute of higher learning, with knowledge having been democratized substantially even as it becomes increasingly essential to merely eking out a simple living. And, in most cases, the religious roots of universities in general or your school in particular have been hidden or downplayed as our world becomes increasingly secular. These changes have taken place at different paces and in different ways over about a thousand years. While the differences are significant and reflect the ways in which knowledge and civilization have evolved, given the magnitude of time involved, we might even consider the changes modest.

    Figure 02: Historically, an oasis of knowledge for the privileged few, the university was integrated into cities such as Paris, Oxford, and Prague. [Illustration: “University”, Themistocles von Eckenbrecher, 1890 drawing, National Gallery of Art, Open Access]

    Of course, in light of the rapid changes in technology of recent decades, and its effects both economically and socially, formal education is woefully out-of-date. Designed in an analog world and forcing everyone to take the same curriculum, where different subjects were taught more-or-less the same way, formal education is conducted clumsily at best. A variety of different philosophies and pedagogies have emerged that introduce different methods of learning, but these are not evenly distributed. The standard method of education can be mind-numbingly rote — the cavernous lecture halls with hundreds of students for each professor. Regardless of these barriers, we aspire to a better way. The next wave of significant innovation in education looks like it will be extreme. In the upcoming decades we can expect the confluence of AI and other emerging technologies, and ever-increasing knowledge about ourselves, the human animal — how we learn, how we live, how we participate in a society of connected people — to revolutionize education.

    How will education change in a world of emerging technologies? How will AI assist, enhance, administer, regulate, and otherwise alter the complex interactions that drive human learning? By examining some of the gaps in the current model of formal education, we can see some of the places where such technology can and will play a significant role. In fact, the breadcrumbs for these changes are already here, right before us.

    AI and the Path to a Personalized Curriculum

    Ben Nelson, the Founder of Minerva Schools at KGI, espouses a connected education, one that leverages the information sharing power of technology, with a systems thinking approach that sees the importance of connecting each part of the experience together. Minerva Schools’ philosophy and structure is an inspiring glimpse into the future of education. “In the education system of the future, online artificial intelligence and virtual reality platforms will become very important in the transmission of knowledge. Education should teach us to be more flexible and provide tools for transformation,” says Nelson, during a panel discussion on the future of education organized by ESADE Business and Law School. According to Nelson, such a paradigm shift “will lead to a more personalised learning experience, a better user experience for each student.”

    Some of the first steps in customizing curriculum on a per-student basis are happening in the field already. There are a number of companies developing AI-driven software for the education market with this purpose in mind. And while the artificial intelligence may provide an initial assessment and learning assignments, there is an important element of human collaboration as well. Teachers can work in concert with these tools, and can modify and even override the recommendations to better suit the student. For example, Knewton, a New York based e-learning company, uses AI-enabled adaptive learning technology to identify gaps in a student’s existing knowledge and provide the most appropriate coursework for a variety of subjects which include math, chemistry, statistics, and economics. More than 14 million students have already taken Knewton’s courses. EdTech software from Cognii uses conversation — driven by an AI-powered virtual learning assistant — to tutor students and provide feedback in real time. The Cognii virtual assistant is customized to each student’s needs. And math-focused Carnegie Learning from Pittsburgh has developed an intelligent instruction design that uses AI to deliver to students the right topics at the right time, enabling a completely personalized cycle of learning, testing, and feedback. These examples, while nascent, are indicative of a push towards AI-enabled personalized learning, which over time will change the face of formal education.

    Optimizing the Educational Experience, From Passive to Active Learning

    Personalization, however, is just the tip of the iceberg when it comes to changes in the delivery of education. For insight into how formal education will be further transformed in the future, we spoke with Nelson about Minerva School’s learning philosophy: “If you think about a traditional education in a college or university, you think about it in a conglomeration of really independent units,” says Nelson. “You take 30 courses while you’re in school. There are 30 different professors who teach these courses. Those professors really don’t coordinate with one another very much. In fact, they have no idea what the makeup of their student body is, within their particular class. Maybe certain students will have taken courses XYZ. Other students will have taken courses ABC. … The nature of that education is very much on a unitized basis.”

    Nelson sees the university approach — a lecture-based format oriented towards the dissemination of information — as one desperately in need of innovation. “Students come in [and] a professor speaks for all or the overwhelming majority of the time. Even when a professor will take questions, the lecture effectively passes from one professor to one student. The majority of students are sitting passively in class,” says Nelson. “There are two problems with these two models. Curricularly, from a curricular design perspective, the world doesn’t work in discrete parts of subject based knowledge.”
    “The world isn’t divided into physics in isolation from biology in most cases, or politics isolated from economics, in all cases, etc. The learning of discrete pieces of information isn’t very much related to the way the world works. It’s also, by the way, not related to the way people think.”
    “When we think about somebody who is wise or can think about appropriate applications of practical knowledge to particular situations, we think of somebody who has learned lessons in one context, and applies them to another,” says Nelson. “When you deliver education in discrete packages, it turns out the brain has a very hard time with understanding that. Secondly, when you’re sitting in an environment where you’re passively receiving information, the retention in the brain of that information is minuscule. Study after study has shown that in a typical test and lecture based class, within six months of the end of the semester, students have forgotten 90% of what they knew during the final, which basically means it’s ineffective. [At Minerva,] we change both of these aspects.”

    Nelson describes Minerva’s approach to curriculum architecture, which is supported and delivered by their tech platform. “First, we create a curriculum and a delivery mechanism that ensures that your education isn’t looked at on a course by course basis, but is looked at from a curricular perspective. The way we do that is that we codify dozens of different elements, learning objectives that we refer to as habits of mind or foundational concepts. Habits of mind are things that become automatic with practice. Foundational concepts are things that are generative, things that once you learn, you can build off of in many different ways. Then these learning objectives get introduced in one course in a particular context. They then get presented in different contexts in the very same course, and then they show up in courses throughout the curriculum in new contexts again, until you have learned generalizable learning objectives. [This] means that you have learned things conceptually and the ways to apply them practically in multiple contexts, which means that when you encounter original situations, original contexts, you’ll be able to know what to do in those situations.”

    “The piece of technology that we have deployed in part, but are going to continue to build on and work on in the very near future, is this idea of the scaffolded curriculum — introducing a particular learning objective, and then tracking how that learning objective is applied and mastered across 30 different professors in four years,” says Nelson. “This doesn’t sound like such a radical improvement, but it fundamentally changes the nature of education.”
    “That, by the way, can only be done with technology,” says Nelson. “Without technology, you cannot track individual student progress and modify their personalized intellectual development in a classroom environment. You need to have the data. You need to have data in a way the professor can react and do something with it. Without technology, it’s just impossible. It’s impossible to collect the data. It’s impossible to disseminate it. It’s impossible to present it to the professor in real time, in a format which they can use it.”
    While connected technologies are an enabler of this approach, just as important is the knowledge of and will to implement better approaches to learning. “We make sure that 100% of our classes are fully active,” says Nelson. “What does fully active mean? It means that our professors aren’t allowed to talk for more than four minutes at a time. Their lesson plans are structured in such a way that the professor’s job is really to facilitate novel application from students on what they have studied … and actually use class time to further the intellectual development of the students. We’re able to do this because we’ve built an entirely new learning environment.” Two years after the end of an active learning class, students retain 70% of the information, in contrast with a mere 10% retention rate for lecture and test based classes. “Minerva courses are extremely engaging,” says Nelson. “They’re very intensive. They’re integrative, in the sense that you bring together different areas and fields together, and they’re effective.”

    Minerva’s educational environment and classes are conducted entirely online via live video. “All of the students live together, but the professors are all over the world. We hire professors to be on our staff full time, but we don’t let geography constrain them, which is another beauty of having a platform that enables close interaction between professor and student. That allows for two things. It allows for our professors to be the best in the world in teaching their subjects, and it enables the students to change their location. That’s why, at Minerva, students live in seven different countries by the time they graduate, because they don’t have to take the faculty with them. The faculty is accessible anywhere you are in the world. It gives our students the opportunity to both have a very deep formal education, as well as the ability to apply that in multiple contexts in the real world.”

    From Active Learning to Augmented Reality

    Virtual and augmented reality are emerging technologies that have exciting educational applications — ones that align well with the idea of active learning espoused by Nelson and others. Stephen Anderson, a design education leader who is Head of Practice Development at Capital One, sees emerging tech like augmented reality as the next step to creating an environment of continual learning and positive feedback loops.

    Figure 03: Virtual reality has the potential to immerse us in new learning environments and help us cultivate our sense of curiosity. [Photo: by Scott Web on Unsplash]

    “We have this progression based approach where we move kids through the same grade levels at the same age, expect them to learn the same material. And it’s very industrial or organizationally oriented. We will treat all learners as the same. Move them through and they’ll graduate by this age with this amount of knowledge and oh, by the way, they have to cover this material in this year. And I understand why we’ve arrived at that model because it’s an easy one to scale,” says Anderson. “But we know it’s not effective, right? It’s not the best way to teach people. And the best way to teach people is nothing new. It’s been around since at least Maria Montessori in the late ’80s, ’90s, where it’s much more about the learner. It’s much more about cultivating a sense of curiosity about the world [as well as an] interest in learning and teaching yourself. You see it play out in Montessori programs around the world … their offshoots or various things like the Waldorf school, where it’s almost at the other extreme—where there’s not a concern with being comprehensive and covering all the concepts. It’s more about hands on, active engagement with things, encouraging or project based, inquiry based, all these things.”

    Anderson gives us an active learning example demonstrating how augmented reality might help us learn in a hands on way, in a wide variety of contexts — even in day-to-day tasks like cooking in our homes. “… As prices come down in cameras and projectors, imagine if every light bulb in our house could project onto a surface and also see interactions. So, now virtually all surfaces become interactive. So, you can be [in the kitchen] at your cutting board and cutting something and getting feedback [from the learning system] around like this slice of meat should be a little bit thinner or thicker. … You see these timeless ideas of feedback loops and interactivity and playfulness,” says Anderson.

    Nelson, too, sees realities and future trends that will emerge but aren’t here yet, as he talks about emerging technology and Minerva: “I believe that some of the real opportunities in the future are going to be where augmented reality will effectively replace the need to have a laptop-type interface. I could imagine augmented reality where you have a classroom of students and a professor in one place, or where you have actually 30 students in 30 different parts of the world having an immersive real life experience with a professor with the data overlay. I think when you have the opportunity for education to remove boundaries and constraints, you can all of a sudden think very differently about what the nature of education should be. That empowers humans to come up with solutions that are far, far, far more advanced than what most universities are currently doing.”