Point 1: Open Source projects are the default case for organizations.

Explicit reasons are now required not to make an application open source.  This applies to all participating organizations and attendees, from large corporate entities such as Microsoft down to one-person consultancies.  It is a monumental change for .NET organizations, as this was the last large community to defend the closed source model.  It does not mean that the data processed through the system is open source or available, as in the case of Software as a Service.

A metaphor that exemplifies this: you can know the recipe and have all the ingredients, but the chef makes the meal.

Point 2: GitHub is synonymous with Open Source and Technology

Competitors of all sizes and abilities have come and gone, but GitHub has prevailed.  Its community size, activity, and features overwhelm all rivals.  Developers are on GitHub, and they expect your company to be as well.

Point 3: Desktop applications are out.  Organizations that wish to continue with Desktop applications will be doing all the work themselves.

Expect no help, no support, no incentives, and no developers to join you if you continue on this path.  Microsoft is not going to help you and is actively persuading customers to switch to Software as a Service designs, web applications that live in a cloud environment, or applications designed for mobile devices.

The benefit is that the best and largest group of developers is using cloud technology, so the tools, practices, and infrastructure are tested every day and in every way possible, ensuring that only the best products survive and influence the future of technology.

Applications that live in curated stores will have a role going forward, e.g. the Windows Store, Office 365 Store, Google Play, Apple App Store, etc.

Directly from Microsoft's CEO and all subordinates:
**"Cloud first, mobile first"**

The cloud gives a developer access to more capabilities than they could obtain on their own, and the hosting companies can add features seamlessly, with a guaranteed stream of revenue instead of the volatile product-sales upgrade cycle.

Examples:

  • Instant scalability
  • Proven and documented examples on how to build/test/maintain/distribute your application.
  • Any API Capability on demand
  • Artificial intelligence
  • Machine learning
  • Burst processing
  • The list goes on: whatever you can imagine, there is an API for it.  For example http://star-trek-the-next-generation.wikia.com/api/v1/
  • Worldwide distribution
  • Testing variants previously unavailable or unaffordable, e.g. how would a million requests a minute impact my service/website?  Such a test is trivial to set up because of the work of others
  • Security and encryption of the highest level
  • Reliability/Availability

A limited survey of attendees found no one doing desktop applications; the model was regarded as a problematic way to distribute software, collect revenue, deliver features, and attract talented developers to continue a product.  The DotNetFringe conference was going on at the same time as the Microsoft Worldwide Partner Conference (https://partner.microsoft.com/en-us/wpc).  WPC had no sessions on desktop application development; additionally, the top technical talent from Microsoft .NET was at DotNetFringe, not WPC (WPC seems geared as a business/sales conference).

Point 4: Isolate your applications

Isolation and consistent setup for transfer of an application from

  • Developer -> Developer
  • Developer -> Test
  • Test -> Production

is simplified, verifiable, and consistent through the use of a container, in this case Docker.

A side benefit of using Docker is that, unlike with a virtual machine, the operating system of the hypervisor running the containers is not duplicated, saving memory and disk space.  Case in point: a small Windows Server 2012 (and up) virtual machine runs with 2 GB or more of RAM, but a Docker container consumes only the memory it needs, so it starts out at roughly 1/10th of that size (200 MB) and grows/shrinks as needed.

With Linux containers the Docker container is smaller still, allowing you to run containerized applications with less overhead and commitment.  This practice is the default standard in cloud systems going forward, as it reduces cost/consumption on both the provider side (cloud host) and the client side (cloud client).  This technology is used by datacenters.
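The consistent setup described above is typically captured in a Dockerfile.  A minimal sketch, assuming a hypothetical .NET Core project (the image tag and project name are illustrative, not from the source):

```dockerfile
# Illustrative only: base image tag and project layout are assumptions.
FROM microsoft/dotnet:1.0-sdk

# Copy the project into the image so every environment builds the same bits.
WORKDIR /app
COPY . .

# Restore dependencies and compile once; the result is identical
# for developer, test, and production environments.
RUN dotnet restore
RUN dotnet build

# Same entry point wherever the container runs.
CMD ["dotnet", "run"]
```

Because the image carries the application and its dependencies, handing it from developer to test to production is just moving the same artifact.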

Point 5: Testing and deployment can be simpler.

All of these open source projects are successful while being guided by a multitude of developers of varying skill levels, scattered throughout the world.  In that environment, building a stable continuous integration test suite is essential to keeping projects alive without financial motivation and with limited developer resources.

When corporations use it, performance and developer ability only improve: deployments and basic testing can occur without the stress of, say, a timed manual deployment.  The handoffs between stages, code versioning, basic smoke testing of whether the system works, and the other elements of staged deployments are stressful not only to developers but also to users and infrastructure engineers.

The point of continuous integration is to automate these stresses away.  Deployments occur when code is checked in, and the version at each predefined level stays the same along the flow: coding -> dev server -> tests pass -> stage server -> stage tests -> production server -> production tests.  If there are problems, patches flow forward normally, avoiding the need to coordinate items that are not automatically linked and the accompanying worries: will this be available, what will happen if this version is deployed, and how does this patch get into this version?
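The staged flow above can be sketched as a script where any failing stage stops promotion to later environments.  The build/test/deploy commands here are hypothetical placeholders (plain echo), not a real CI system:

```shell
#!/bin/sh
# Sketch of a staged CI pipeline; stage commands are placeholders.
set -e   # a failing stage aborts the pipeline immediately

run_stage() {
  # Print the stage name, then run its command; set -e halts on failure.
  echo "=== $1 ==="
  shift
  "$@"
}

run_stage "check-in build"     echo "compiling checked-in code"
run_stage "dev tests"          echo "unit tests on dev server"
run_stage "stage deploy"       echo "promoting build to stage"
run_stage "stage tests"        echo "integration tests on stage"
run_stage "production deploy"  echo "promoting build to production"
run_stage "production tests"   echo "smoke tests on production"

PIPELINE_STATUS=ok
echo "pipeline $PIPELINE_STATUS"
```

Real systems replace each echo with build/test/deploy commands, but the gating logic is the same: code only reaches the next environment by passing the previous one.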

Point 6: Scrum is best served with continuous improvement

Organizations that strive for continuous improvement gain orders of magnitude in productivity through experimentation with and evaluation of new processes, and avoid being trapped in legacy architectures and practices, as technologies and practices are constantly in flux.  Case in point: the best way to avoid drowning is to learn to swim, but that won't work forever, so maybe we should build a boat, or a bridge, and so on.  Situations change, and re-evaluation and experimentation bring better approaches.


Example:

Standard: Why is this process like this?  The answer: it works, or it is required.
Knowledge-seeking option: What if we did it this way?  Is there a better way?
In our case, Sprints have a clear demarcation of a start (the planning meeting), but without an accompanying demarcation of an end there is no consistent opportunity to evaluate the Sprint.
To fix this, an organization should hold retrospectives as often as it holds Sprint planning meetings, and not on the same day: without time to implement changes, retrospectives exist as pure ceremony instead of a meaningful time to make improvements.

Prediction: Windows Server 2016 will be the last product version of server and you will be forced to upgrade if running locally

Windows Server will continue in some form in Azure indefinitely, albeit perhaps relegated to legacy operating systems, but as a standalone product with upgrade cycles, 2016 is the end.  Just as Windows 10 is the last productized version of Windows, I anticipate the same for Windows Server 2016, given Microsoft's product cycle and management.  There is also no way to turn this around: the process has started, and those who objected to Software as a Service and opposed the cloud are gone and will never return.  If you do not want to follow this model, Microsoft encourages you to switch to Linux, as the modern .NET Core runs on Linux.

To avoid these issues, take the encouragement to containerize your applications; that will keep your code available whatever form the future of Windows takes: server, client, etc.

Notes from sessions:

  • Keynote – Communicate with respect and courtesy in open and closed source.  Attempt to make common level of interaction with non-developers by establishing availability and openness.
  • F# Machine Learning – Machine learning is a different paradigm of programming.  In traditional programming you construct an ordered set of goals through methods and practices, i.e. building blocks.  Machine learning is a scientific approach in which you postulate theories and then run mass experiments until you arrive at a reasonable model.  You don't know the building blocks or the process, only the inputs and what you are striving for.  It gives you the ability to attack previously unsolvable problems and build efficient models from observations instead of attempting to plan for the unforeseen.  It will give you unexpected results and insights that are incredibly difficult to obtain from traditional programming.
  • Docker Containers for 4.6 Applications – Windows Server 2016 includes full container support and can run Docker natively.  This means any Docker container can run; there are also special Windows-native containers that can run legacy applications.  The demonstration showed IIS in a container running .NET 4.6.1.  These containers could, I think, isolate legacy desktop applications as well.
  • OAuth – OAuth 2.0 simplifies the system, along with OpenID identity integration.  No attendees I spoke with used Windows Authentication, regarding it as insecure and inefficient.  JSON Web Tokens were on display again as the easiest way to handle authorization; the service Auth0 was a sponsor, and many attendees used it.
  • Domain Driven Design – Create a workflow based on a set of events in the past to create a pipeline for implementation.  This can be analogous to a User Story or a Feature in an application, but the events are always things that occur, and the work of aligning them is done by the whole group at once, to avoid missing the perspective of the actual user and the questions of the developers.  No separation here: all involved parties should attend, from the developers writing the code to the users of the system and the business budget participants.
  • Pipeline plugin architectures without state - Modular plugin architecture that flows in a pipeline system architecture will yield maintainable and robust code.  Avoid state between components or if unavoidable publish new state to each component instead of moving the global state along the pipeline.
  • EventSource Microservices – Design architecture to lock legacy code into a contained and immutable state with precise and dedicated output events which new code will use to add features.  Locking the legacy application means no longer having to work with legacy architecture removing all the constraints and allowing for modern techniques and approaches.  Handle the defined events and work with those.
  • Bobcat Installer – A software company doing manual installations stopped that process and built a new background Windows installer.  Not open source yet, as they need to clean out hardcoded passwords, but it will be within two months; I was given a preview of the source and spoke with the developers.  Clean and extensible.
  • Apache Spark – Microsoft's open source Mobius project lets you write Spark jobs in .NET.  Apache Spark is about 100x faster than Hadoop, and the jobs are easier to write.  When working with Big Data this is the best path forward, with full .NET support.
  • Shrinkwrap – npm package to control and lock dependencies for a Node project.
  • Paket – Nuget management tool of dependencies, great improvement to Nuget system
  • Kubernetes – Distributed system to maintain and run your code and services in an instantly scalable and manageable way.  Management can be automated with operations rules written as code, and the system performs under stress and under failures: it recovers, operates, and adapts as needed.  Too many users on one node?  It automatically clones the node and diverts traffic.  A node failure moves traffic to redundant nodes.  This is a Google open source project; they use it to scale themselves.  Write the code and the system will run anywhere and in any form, with no need to be available for support, as the system adapts and runs always.  Incredibly powerful, and all the details live in the system.  My prediction is that this is the future of all application development everywhere, as it runs anywhere: cloud, hybrid, and local.
  • Xamarin Test Recorder – Nunit Test + SpecFlow test + a UI test recorder macro allows a developer to write tests exactly as a human user would use the system entirely.  Complete in every respect and able to run in an automated way and against the Xamarin Test Cloud of thousands of physical devices instantly.
  • JavaScript source maps – Generating source maps for JavaScript allows debuggers to attach and stop at the precise point in the generated code from a variety of languages.  Saw this with F# compiled to JavaScript: the compiler also generated the source map, so a click in the browser stopped in the F# code in debug mode, directed by the source map.
  • Logary – Swap and enable different logging systems at runtime/design time. Provides common interface and handles different loggers
  • NBench – Project from the creators of Akka.NET to provide consistent and relevant performance testing in the common format of unit tests.  If you can write unit tests then you can write performance tests.
  • Hadoop.NET – Port of Hadoop to .NET; initial stage, not complete or working.  May be available by the end of the year.
  • Hypermedia abused/obsolete – Current web applications have perverted and destroyed the original intent of Hypermedia and Hypertext as linked documents.  Websites wish to control the experience and the data as single page applications (Facebook, Amazon, Google applications) instead of linking to sources and other documents.  The rise of JavaScript and Node.js has reinforced this approach.  Tim Berners-Lee never intended for the World Wide Web to remake desktop applications within a box, that box being the browser, concentrating your data in the hands of a few large corporations.  As JavaScript has won the battle for the preeminent language and format of the current web, JSON-LD (JSON Linked Data) attempts to bring back the linked-data intent of the web in the currently most popular language.
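The Kubernetes behavior noted above (declare a desired state, let the system clone nodes and divert traffic on failure) is expressed declaratively.  A hypothetical Deployment manifest sketch; the service name, image, and replica count are illustrative assumptions, not from the sessions:

```yaml
# Illustrative Deployment: Kubernetes keeps 3 copies running,
# replacing any that fail and spreading traffic across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3            # redundant instances; failures are replaced automatically
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: myregistry/my-service:1.0   # the containerized application
        ports:
        - containerPort: 80
```

You declare the desired state and Kubernetes continuously reconciles reality to match it, which is what allows the "no need to be available for support" operating style described in the session.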