DotNetFringe 2017 - Recap

Point 1: "The Company"'s technology systems and their use are drastically behind and isolated

Why is this?  Fundamentally, I think it is because we rely on the active memories of project members to backfill things that other companies are forced to do when confronted with code that no one present wrote or fully understands.  This reliance on institutional knowledge is helpful in some cases, but it is hurting us in adopting new practices and technologies.

People used to practice waterfall deliveries of applications because that was how people thought and devised solutions.  It wasn't great, and that is why companies that practiced it and would not institutionally change were overwhelmed and outcompeted.

If everyone involved in a project for "The Company" were completely new to it, and no one had been coding at "The Company" for more than two years, practices would need to be different.  Questions would be asked about the entire system, and answers would be sought for issues that are currently assumed to simply be how things work.

The new developers would rewrite the application, and they would use ideas and practices they picked up at other companies.  That means they would write it in new languages, use tests to verify it, deploy it to different platforms, etc.  Other companies do not have to specify time limits or expirations on products because they simply rewrite when requested.

This type of turnover and change in software developers is normal for a stable software company; at an unstable company, the longest-serving developer might have been there a few months at most.  Software developers for the most part do not refine an application; they move on, or they rewrite it with the new technology and information available.  Other companies have experienced this, and it forces them to adopt practices that cycle knowledge throughout the organization while still delivering robust working code.  This practice is growing with open source and with the increase in the total number of software developers overall.

Open source projects are geared toward the problem at hand and drop support for legacy systems as soon as they possibly can, because supporting older systems stretches their limited resources.  As more and more companies adopt more and more open source software (they need to stay competitive and continue to keep costs down), they are orienting their architecture forward along with their libraries and dependencies, which are growing exponentially.

This practice happens in other industries as well.  In civil engineering, old wooden bridges are for the most part replaced with new steel or concrete ones.  Fixing the old bridge may be possible, but it will never be as strong or as well suited to modern needs.  If it is not replaced, it will cost more to bring it up to specification to support, say, a semi-truck, and it will require more maintenance from specialists who need more time to examine it.  The same could be said of the majority of suburban houses: it is easier to tear down and rebuild than to rewire, reinsulate, and so on.

Software code rots significantly faster because its fundamentals change at a faster rate and over a larger space.  Nothing is stable or permanent in any sense of the outside world.  JavaScript is the best example of this: frameworks come and go as the language and its demands change, and lasting more than a year as an in-use framework is the closest thing to permanent a JavaScript framework gets.

The only modern technology we are using is React/Redux, but we are using it in a nonstandard manner (everyone else uses Node and its helpers: Webpack, the Node Debugger/Node Inspector, and automated tests on the DOM with Jest, Jasmine, etc.).  We are doing React/Redux the hard way, and attendees thought what we are doing is error prone and unmanageable at best; other comments were harsher.

My survey

  • Cloud: We are in the cloud.  You are not in the cloud??  How does that work, you do things manually?!  You know that is error prone and incredibly hard to manage.
  • Tests: You must do tests; the concept of having no tests was not even entertained as mentally possible.  I am serious, that is not an overstatement.  Automated testing produces artifacts from a reliable process and allows for verification (a minimal sketch follows this list).
  • Automation: Automation is a solved problem with many options, and the cloud ones are the easiest to use.  Automate everything possible, and build that automation on containers or VMs in the cloud.  You should try the available tools (Heroku/CircleCI/TravisCI) so you can focus on your application code.
  • Technology: Linux is where you deploy in the cloud, so you should be using Linux and be cross-platform; then you don't have to worry about strange behaviors.
  • JavaScript: Don't confine it to the browser.  You should be putting it everywhere: Node servers, native apps, browsers.  It will make things easier on you.
  • JSPM: What is that?  We use Node/Webpack/Browserify.  Why are you using that?  I have never heard of it.  Don't do things the hard way.
  • Containers: Since you need to use containers, pick one: Docker, Kubernetes, Rancher, whatever.  Whichever one you pick, expect it to require you to think and debug in a different manner, and the transition is hard.  If you are starting out as a new programmer you don't have to unlearn things, but most attendees were experienced and had to change their expectations and ideas.  Companies are using AWS/Azure as their cloud providers, which change every single day/hour/minute; we will never catch up in knowledge.
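
To make the Tests point concrete, here is a minimal sketch (in Go, which needs nothing beyond go test) of the kind of automated test attendees take for granted.  ApplyDiscount is a hypothetical stand-in for a business rule, not code from any of our applications; the value is that running the suite in CI produces a reliable pass/fail artifact on every change.

package pricing

import "testing"

// ApplyDiscount is a hypothetical example rule used only to give the test
// something to verify.
func ApplyDiscount(total, percent float64) float64 {
    return total - total*(percent/100)
}

// TestApplyDiscount is the automated artifact: go test runs it on every
// commit and fails the build if the rule changes unexpectedly.
func TestApplyDiscount(t *testing.T) {
    cases := []struct {
        name           string
        total, percent float64
        want           float64
    }{
        {"no discount", 100, 0, 100},
        {"half off", 80, 50, 40},
        {"full discount", 50, 100, 0},
    }
    for _, c := range cases {
        if got := ApplyDiscount(c.total, c.percent); got != c.want {
            t.Errorf("%s: ApplyDiscount(%v, %v) = %v, want %v", c.name, c.total, c.percent, got, c.want)
        }
    }
}

Hooking go test (or the equivalent in any other language) into CircleCI or TravisCI is the Automation point: the same artifact is produced on every push without anyone remembering to run anything.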

I suggest the following plan:

Start clean, as if the customer is completely new and there are no previous relationships, on any new item, and put it in a cloud provider.  Write each part as a microservice, and do not agree on anything about code use (that takes too long and restricts options) other than that communication between microservices will take place over TCP/IP with HTTP/JSON messages.  This is for the most part how web applications are already written here, except that the frontend JavaScript code and the backend code live within the same application codebase.  The data is already getting serialized to JSON and passed as HTTP over TCP/IP right now, so the separation of operations is already a familiar concept.
Each microservice is run by one or more developers, and they write it any way they want, in any language and on any platform they want (a minimal sketch of one such service, in Go, follows).
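
As a concrete illustration of the one shared agreement, here is a minimal sketch, in Go, of a microservice that speaks HTTP/JSON over TCP/IP.  The /status route and the Status shape are hypothetical; the point is that the contract is nothing more than an HTTP endpoint returning JSON, and everything behind it is the owning developer's choice.

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// Status is the JSON shape this example service exposes; other services
// depend only on this contract, never on the internals behind it.
type Status struct {
    Service string `json:"service"`
    Healthy bool   `json:"healthy"`
}

// statusHandler answers the HTTP/JSON contract for this service.
func statusHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(Status{Service: "example", Healthy: true})
}

func main() {
    http.HandleFunc("/status", statusHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}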

It will not matter, because when the application is built cloud native the options for deployment and maintenance are varied and large.  This approach distributes risk and allows significant architecture decisions to be encapsulated.  It carries minimal risk because every part of the system will have unit tests for its particular microservice and integration tests for when the parts are ready to be combined.  A new feature branch must have a passing integration test before it can be combined back into the main branch; if it does not, it can't be merged back in (a sketch of such a test follows).
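
This is a sketch of the kind of integration test a feature branch would need to pass before merging, assuming it sits in the same package as the service sketch above so statusHandler and Status are visible.  It uses the standard library's httptest server to exercise the service over real HTTP and checks only the HTTP/JSON contract, never the internals.

package main

import (
    "encoding/json"
    "net/http"
    "net/http/httptest"
    "testing"
)

// TestStatusEndpoint spins up the service handler on a throwaway HTTP server
// and verifies the JSON contract that other microservices depend on.
func TestStatusEndpoint(t *testing.T) {
    srv := httptest.NewServer(http.HandlerFunc(statusHandler))
    defer srv.Close()

    resp, err := http.Get(srv.URL + "/status")
    if err != nil {
        t.Fatalf("request failed: %v", err)
    }
    defer resp.Body.Close()

    var got Status
    if err := json.NewDecoder(resp.Body).Decode(&got); err != nil {
        t.Fatalf("response was not valid JSON: %v", err)
    }
    if !got.Healthy {
        t.Errorf("expected a healthy service, got %+v", got)
    }
}

If this test does not pass in the automated build, the branch simply does not merge.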

Examples:

  • Ubuntu Nginx reverse proxy to a Node web application serving TurfJS operations on map data
  • CentOS C# .NET Core application processing RabbitMQ messages
  • Golang web API returning JSON data from a Postgres database, with a Redis cache for performance (sketched after this list)
  • Crystal/Kemal MVC application interacting with a MySQL database
  • And others...
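
As a sketch of the Golang example above (an illustration, not a prescription), such a service could look roughly like this: an HTTP endpoint returning JSON read from Postgres through the standard database/sql package, with a small in-process map standing in for the Redis cache so the example stays self-contained.  The mapcount table, its columns, the connection string, and the lib/pq driver choice are all assumptions made up for this sketch.

package main

import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"
    "sync"

    _ "github.com/lib/pq" // assumed Postgres driver; any database/sql driver works
)

// Count is the JSON shape this service returns to its callers.
type Count struct {
    Region string `json:"region"`
    Total  int    `json:"total"`
}

var (
    db    *sql.DB
    mu    sync.Mutex
    cache = map[string]Count{} // in-process stand-in for the Redis cache
)

// countHandler serves /counts?region=... as JSON, checking the cache first
// and falling back to Postgres on a miss.
func countHandler(w http.ResponseWriter, r *http.Request) {
    region := r.URL.Query().Get("region")

    mu.Lock()
    c, ok := cache[region]
    mu.Unlock()

    if !ok {
        row := db.QueryRow("SELECT region, total FROM mapcount WHERE region = $1", region)
        if err := row.Scan(&c.Region, &c.Total); err != nil {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        mu.Lock()
        cache[region] = c
        mu.Unlock()
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(c)
}

func main() {
    var err error
    db, err = sql.Open("postgres", "postgres://localhost/mapdata?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    http.HandleFunc("/counts", countHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Nothing outside this service needs to know whether the cache is a map, Redis, or gone entirely; callers only ever see the HTTP/JSON contract.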

Under this architecture any microservice can be deployed and updated without disturbing the application framework.  If it cannot be, then you must separate the application logic further and maintain compatibility for the dependent services.  Do not share data sources; it is OK to have separate data formats, sources, codebases, test systems, etc.

Delaying the move does not make things easier; it means there are more concepts and ideas to incorporate properly, and the most valuable knowledge, how to piece things together, is not documented anywhere.  It is built through experience and pain.  (This was a common point from every single attendee I talked to: you will get to this point and no one can help you; you must get through it and learn.)

This system is composable, fault tolerant, and easily upgradeable when you keep your boundaries strict.  As a side note, it lets each developer embrace the level of experimentation and change they feel comfortable with, and it provides a myriad of experiments that will turn up unexpected benefits.

Companies that have run through this, though, can deploy a completely scalable application in little to no time, and because they have built it to run with limited scope, containerized or not, the business logic is what they can focus on, delivering features without interruption.

Whereas we, I think, would have to rewrite the application every single time it reached a performance bottleneck.  Hardware is cheap if you make your application horizontally scalable; if you do not, vertical scaling is exorbitantly priced and still may not fix the problem if you are network bound on performance, which most applications become once their functional operations are tuned up.

Point 2: Containers are significant and their use is increasing

They operate in a manner that requires entire organizational change to harness them.  Containers are not easy (http://containerjournal.com/2017/06/09/docker-containers-hard-just-like-great-technologies/), but they give you power and reduce costs.  They are not going away.  This is not a fad, and the biggest reason is that cloud providers need containers to get efficient use of their resources, which are then used by their customers.  By offering the same abilities and continuously improving the container technology they themselves use, providers and their customers are moving far ahead into automation that is completely different from how traditionally deployed applications worked.

Because the majority of containers run in a cloud system, they allow for abstraction of the contained code: you are interested only in the inputs and outputs, so the internals of the containerized application can be changed independently (platform, language) or even load balanced with mirrors, without brittle control systems.


Point 3: Big Data is not a focus – Replaced with Machine Learning

No developers were working in Big Data; they were working with Machine Learning on several things.  They did use large datasets that would be considered big data (though with containers, horizontal scaling, and microservices, any amount of data fits in memory if you want it to), but the Big Data concept itself is so fraught with complexity, and so geared toward reflection instead of action, that the datasets are out of date before action can be taken.  Machine Learning, Observables, and Streams are the focus of cloud providers, and since every person at the conference developed for cloud systems, they could easily convert their large datasets to streams or have machine learning algorithms independently identify and inform.  The second part of this is that Big Data is an area where contractors sell services that cost so much to run that doing Machine Learning is orders of magnitude cheaper, faster, more reliable, and easier to debug.

An example I can think of: a significant number of user phones are experiencing intermittent outages on their network.  A Big Data analyst would need to take some data, run it through a process, and get you a result hours to days after it occurred.  A machine learning algorithm trained to recognize intermittent service would be able to identify the problem beforehand and route the user to a more reliable service; even if it were not trained, it could be set to alert that a significantly new model is being produced by the current network data, informing the system administrators of issues in real time (a simplified sketch of this streaming approach follows this paragraph).  Presenters and attendees treated Big Data as a stopgap mechanism whose time has passed.  Google hasn't used MapReduce for years (http://www.datacenterknowledge.com/archives/2014/06/25/google-dumps-mapreduce-favor-new-hyper-scale-analytics-system/), and as the company that started this category it has already gone through its start -> usefulness -> end -> new better thing.
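
A deliberately simplified sketch of that streaming approach, in Go: instead of batching network data for later analysis, each latency sample is checked as it arrives and an alert fires immediately.  A real system would plug a trained model in where the rolling-average threshold sits; the channels, sample values, and threshold below are all hypothetical.

package main

import "fmt"

// detect consumes latency samples as they arrive and emits an alert the
// moment a sample looks like an intermittent outage. The rolling-average
// threshold is a stand-in for a trained model.
func detect(samples <-chan float64, alerts chan<- string) {
    const window = 5
    var recent []float64
    for s := range samples {
        recent = append(recent, s)
        if len(recent) > window {
            recent = recent[1:]
        }
        var sum float64
        for _, v := range recent {
            sum += v
        }
        avg := sum / float64(len(recent))
        if len(recent) == window && s > 2*avg {
            alerts <- fmt.Sprintf("possible intermittent outage: latency %.0fms vs recent average %.0fms", s, avg)
        }
    }
    close(alerts)
}

func main() {
    samples := make(chan float64)
    alerts := make(chan string)
    go detect(samples, alerts)
    go func() {
        // Hypothetical latency readings arriving from the network in real time.
        for _, s := range []float64{40, 42, 38, 41, 39, 250, 43, 260} {
            samples <- s
        }
        close(samples)
    }()
    for a := range alerts {
        fmt.Println(a)
    }
}

The contrast with the Big Data workflow is the timing: the alert is raised while the data is still current, not hours or days later.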


Point 4: Windows Servers

No mention and no use; it was not even brought up.  As everyone is in the cloud and you should be using resources efficiently, Windows Servers do not fit; they are available but not used unless you must, i.e., for legacy technologies that are being replaced right now.  I stand by my prediction from last year that Windows Server 2016 is the last Windows Server, unless Windows Server becomes a Linux variant.


Point 5: Open Source Developers

Help them maintain projects; if you do not, that project is always one step away from failure.  In our case the most relevant one is JSPM, but we are not team members on that project, so our influence is limited and the project appears to be dying; I recommend moving to Webpack.  Companies that maintain projects are favorites.  Sponsor a project that is used, or have a Labs section (it must be actively maintained) of your company presence on GitHub that shows open source projects created and maintained by current employees.


Point 6: Rehashing my points from last year that were again emphasized over and over by presenters/attendees

  • GitHub is synonymous with Open Source and Technology
  • Open Source projects are the default case for organizations
  • Desktop applications are out. Organizations that wish to continue with desktop applications will be doing all the work themselves (however, JavaScript-to-desktop translators have filled this area: the Electron project, React Native)
  • Isolate your applications
  • Testing and deployment can be simpler