
Using the CentOS package manager to upgrade the .NET SDK

The command on the Microsoft website for updating the .NET Core SDK is apt-get install 'dotnet core package'.

Since CentOS doesn't use the apt-get package manager but yum, we can simply change this...
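A minimal sketch of the yum equivalent (the exact package name is an assumption; check Microsoft's documentation for the SDK version you need):

sudo yum install dotnet-sdk-2.2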




Disable firewall on CentOS


Disable firewall on CentOS

While developing or testing with Visual Studio Code against a remote Linux system, it's handy to temporarily turn off the firewall!

systemctl stop firewalld
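Stopping the service only lasts until the next reboot. The usual systemctl counterparts turn the firewall back on, or keep it off permanently:

systemctl start firewalld      # turn the firewall back on
systemctl disable firewalld    # keep it off across reboots (use with care)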



IP address on a CentOS system


Get the IP address on a CentOS system

Because getting the IP address differs per Linux variant, this is how to get the IP address on CentOS:

ip addr



Linux



Several Linux notes from a .NET developer's perspective.



Bounded context


Bounded context

The bounded context defines the logical boundary between areas of concern in a Domain-Driven Design solution.




Ubiquitous language


Ubiquitous language

Ubiquitous language is the term Eric Evans uses in Domain-Driven Design for the common language between developers and the business. In this language the business models for the Domain-Driven Design solution are built.
With this language the developers' understanding of what's going on in the business grows. It makes it easier for developers to talk to the business and have a common understanding.

Both the business and the developers have to agree on the terminology used by the various members of the teams.



Domain Driven Design


Domain Driven Design

Domain-Driven Design has its own community on www.dddcommunity.org.




Docker volume



Using a Docker volume to place the log file in a central location

If you run your application as a microservice in a Docker container, you need to think about where to place the log file. Placing the log file in the container itself is not a good idea: the image, which is normally not that big, grows dramatically, and on top of that it is copied everywhere including the log file. So: not good! One option is using Elasticsearch to push the log files to a central location. Another option is using Docker volumes to place the log files on a central location on the Docker host.

Docker Volume

According to the Docker documentation, volumes are the preferred mechanism for persisting data generated by or used by Docker containers. A Docker volume mounts a directory in your container to a directory on the Docker host. Volumes are managed by the Docker CLI and are supported on both Windows and Linux.


Example

docker run -d --name mygeoapi -p 80:80 -v "/var/log:/app/log" --privileged=true mygeoapi:latest

  • -d: run as daemon
  • --name: the name of the Docker container
  • -p: the ports available on the outside of the container
  • -v: volume; the first part is the directory on the host, the second part the directory in the Docker container
  • --privileged=true: needed for write access to the directory
  • :latest: the tag of the image from which the container is built; latest takes the latest available image

Container Directory

Maybe a little obvious, but how do you create a directory in your Docker image? This is done by adding a directory to your solution. Normally you use directories to organise your solution: you place your code files in them and a namespace is created. So in this example you create a directory 'log' in your solution, which can be empty, and by that a directory 'log' is created in your Docker container, which can be mapped to a local log directory, normally /var/log.
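If you prefer not to rely on an empty directory in the solution, a hypothetical alternative is to create the directory during the image build, in the Dockerfile itself:

RUN mkdir -p /app/log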

Be aware!!!

To save you some time, like I wasted some: if you run your container on Linux, LINUX IS CASE SENSITIVE.



Elasticsearch



Logging in .NET Core with Serilog and Elasticsearch

Logging is extremely important in business solutions. Without logging we don't know what's happening in the system. Therefore all programmers implement one of the popular logging frameworks and write their logging statements. In most cases these log statements end up in a log file which resides in the application directory or at a central location somewhere.

Errors

In case of errors in the system the support team has to find the location of the log files and check if there are errors in them. Most of the time the logging level has to be adjusted in order to get a proper diagnosis.

Elasticsearch

The hunt for log file locations can be avoided with Elasticsearch: just write all your logs to Elasticsearch. That way all your logs are in one centralized location. A second advantage is the big index of all logs, which can be filtered on date, on errors by component, etc. Errors will appear early on a dashboard and actions can be taken at an early stage.

How

For this example Serilog is used. The Serilog Elasticsearch sink takes just one parameter: the URL on which the Elasticsearch server can be found.

            Log.Logger = new LoggerConfiguration()
                .MinimumLevel.ControlledBy(GeoCodeApiConfig.LevelSwitch)
                .ReadFrom.Configuration(config)
                .Enrich.WithProperty("IP address", GetLocalIPAddress())
                // the Elasticsearch sink only needs the node URI
                .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
                {
                    AutoRegisterTemplate = true
                })
                // keep a local file sink next to Elasticsearch
                .WriteTo.File(logfile)
                .CreateLogger();

This is enough to place your logs in Elasticsearch and read them with, for example, Kibana.
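Writing a log entry is then just a call on the static Log class; a hypothetical example (the message properties are made up):

Log.Information("GeoCode lookup for {Address} took {Elapsed} ms", address, elapsed);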



TypeScript containerized


TypeScript containerized

A TypeScript project is an ideal project for containerizing. It's normally a project with no direct references to other projects (other than consuming REST resources). A containerized application should contain everything needed to run the application. In the case of TypeScript the only service which is necessary is an HTTP server. There are multiple choices available for use in a container, but a nice one, one you should always consider, is nginx.

Using nginx in a containerized application is not complicated, but there are some things which help you get a quick start.

Pull nginx from the internet

Start using nginx in a separate container; just pull the latest available nginx image from the internet (assuming you have Docker already installed):

docker pull nginx

Create a Dockerfile

The next thing is to create a Dockerfile in your project. This Dockerfile is a set of commands which docker build uses to create the image. Later on this Dockerfile is also used with the docker-compose command and file. A standard Dockerfile looks like this:

FROM nginx:alpine

COPY docker/nginx.conf /etc/nginx/nginx.conf
COPY dist/DockerTestApp /usr/share/nginx/html

WORKDIR /usr/share/nginx/html

The FROM line tells the build to base this image on the image with the name nginx:alpine. Next to that there are two COPY commands: one to copy the nginx config and one to copy the content of your dist folder (which is the result of ng build).

Create the nginx config file

It's nice to copy your own nginx.conf file into the nginx container. In this way you can set some custom parameters for your specific container. A standard nginx config file looks like this:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    server {
        listen 80;
        server_name  localhost;

        root   /usr/share/nginx/html;
        index  index.html index.htm;
        include /etc/nginx/mime.types;

        gzip on;
        gzip_min_length 1000;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}

Build an image

So with this in place you can create the image by running the build:

docker build -t imagename:tagname .

You can build with the same imagename but a different tagname, which can be a version number; it's a tag indicating the version of the image. After the build, check if the image is created with the command:

docker image ls

Run the container

Now that we have an image we can run the application in a container. For that we execute the docker run command. With this you have two options. The first is just running the container. This serves up the content which your build has copied to the right path in the nginx container. This is static and great for checking the build, but if you are debugging and/or building the application it's not that great. Therefore you can also run the container and map a volume to the HTML directory of nginx. This serves up just the content of your dist folder, meaning you only have to execute an ng build to check the output in the container. Both commands:

## run
docker run -p 8080:80 test/inginx:1.0

## run with volume
docker run --rm -it -v ${PWD}/dist/dockertestapp:/usr/share/nginx/html -p 80:80 test/teustest:1.0

The ${PWD} variable points Docker on Windows (PowerShell) to the current directory; on Linux or macOS the pointer has a slightly different syntax, check the documentation for this.
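Under the usual Linux/macOS shells the same command would look something like this (same hypothetical image name):

docker run --rm -it -v $(pwd)/dist/dockertestapp:/usr/share/nginx/html -p 80:80 test/teustest:1.0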

Push it to the server

After running the container locally the image must be pushed to a registry. We expect that all dependencies are in the container and that the container acts exactly as it does on the local workstation. For testing I used a Linux workstation with CentOS and Docker installed. The image is going to be pushed to a private repository on Docker Hub and pulled from this repository on the Linux machine. Pushing an image to Docker Hub has something strange in it: you have to tag the image with the name of your login and repository. So my login is teusvanarkel and the repository is test; I had to tag the image teusvanarkel/test before I was able to push it:

Tag it

docker tag teustest teusvanarkel/test

Push it to Docker Hub

docker push teusvanarkel/test

Pull the container from the server

If you log in to Docker Hub you will see the repository and the image with the tag you just created. Using this image on the Linux machine is nothing more than pulling it from the registry and starting it:

docker pull teusvanarkel/test
docker run teusvanarkel/test

And there you have it: a web application running on a web server without ever needing to install or configure a web server. Very nice!



TypeScript


TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.

GitHub

Report bugs and browse projects on GitHub.

Stack Overflow

Ask TypeScript questions on Stack Overflow using the tag typescript.

Blog

Learn about the latest TypeScript developments via the Microsoft blog!

Twitter

Stay up to date by following @typescriptlang on Twitter!

Definitely Typed

Browse the numerous TypeScript definition files available for common libraries and frameworks.

Friends of TypeScript

Learn about who's already improving their workflows with TypeScript!



Unit test builder pattern


Unit test builder pattern

In the arrange section of a unit test the object which is the subject of the test is created. Most of the time this object has dependencies and a constructor with multiple parameters. This object is usually created in multiple unit tests (and integration tests, system tests, etc.), so a change to this object causes a number of changes in the unit tests. That is in most cases not convenient. Therefore it's handy to use the fluent builder pattern.

This is just the builder pattern as described on Wikipedia, combined with the fluent interface pattern, also on Wikipedia.

Example: the InvoiceGeneratorBuilder class, which returns an InvoiceGenerator object.

    public class InvoiceGeneratorBuilder
    {
        // privates
        private IStarlimsDataAccess starlimsDataAccess;
        private JobTypes jobType = JobTypes.Ubl;

        public InvoiceGeneratorBuilder WithDataAccess(IStarlimsDataAccess starlimsdataAccess)
        {
            starlimsDataAccess = starlimsdataAccess;
            return this;
        }

        public InvoiceGeneratorBuilder WithJobType(JobTypes jobtype)
        {
            jobType = jobtype;
            return this;
        }

        public InvoiceGenerator Build()
        {
            return new InvoiceGenerator(starlimsDataAccess, jobType);
        }
    }
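A hypothetical arrange section using this builder could look like this (the fake data access class is an assumption):

    // arrange: override only the dependencies this test cares about
    var generator = new InvoiceGeneratorBuilder()
        .WithDataAccess(new FakeStarlimsDataAccess())   // hypothetical test double
        .WithJobType(JobTypes.Ubl)
        .Build();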

And here is the change when the InvoiceGenerator constructor is extended with a log object, which happens in real life. The builder now has a method for passing in the log object, if desired; otherwise it uses a default. This is just a little example of how the fluent builder pattern prevents a lot of changes in the unit tests.

    public class InvoiceGeneratorBuilder
    {
        // privates
        private IStarlimsDataAccess starlimsDataAccess;
        private JobTypes jobType = JobTypes.Ubl;
        private ILog logger = LogManager.GetLogger("unittest");

        public InvoiceGeneratorBuilder WithDataAccess(IStarlimsDataAccess starlimsdataAccess)
        {
            starlimsDataAccess = starlimsdataAccess;
            return this;
        }

        public InvoiceGeneratorBuilder WithJobType(JobTypes jobtype)
        {
            jobType = jobtype;
            return this;
        }

        public InvoiceGeneratorBuilder WithLogger(ILog log)
        {
            logger = log;
            return this;
        }

        public InvoiceGenerator Build()
        {
            return new InvoiceGenerator(starlimsDataAccess, jobType, logger);
        }
    }



Docker


SQL Server for Linux in Docker

Docker settings

After installing Docker, change the memory setting to a value above 3200 MB. This is one of the prerequisites for installing SQL Server for Linux.

Docker memory settings

Kitematic 

Use Kitematic to find and install the SQL Server for Linux Docker image. You will find Kitematic in the menu when you right-click on the Docker icon in the system tray:

Docker and Kitematic

In Kitematic, search for the SQL Server for Linux image. Make sure you use the image provided by Microsoft. After downloading the image you will need to add some environment variables. One of them is the SA password; without it you cannot make a connection. There are two ways to specify the environment variables, and one of them is to simply add them in Kitematic:

Kitematic environment variables

Of course it's also possible to start the image directly with the necessary environment variables from PowerShell:

docker run --memory 4096m -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" -p 1433:1433 -d microsoft/mssql-server-linux

Create a database in the Docker container

After starting the container you can connect to SQL Server by specifying the IP address of the host running Docker. In the run command you have specified that port 1433 is passed through Docker, so just specifying the host is enough to connect. Before you can do some tests you need to have a database. There are several ways to create a database; one of them is to use the sqlcmd tool which is included in the latest SQL Server for Linux version. Because it is included and nothing needs to be installed, I'm using the sqlcmd tool.

First start a PowerShell with Docker support. From there you can execute sqlcmd inside the container, but you will notice that by default the Linux container cannot find the sqlcmd tool. Therefore you can use this little hack: add the sqlcmd path to the PATH environment variable for the Bash shell.

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc
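Note that these commands run inside the container; a hypothetical way to get a Bash shell there is docker exec (the container name is an assumption):

docker exec -it mssql-server-linux /bin/bash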

Now you can start the sqlcmd tool to create the database:

:/opt/mssql-tools/bin# sqlcmd -S 192.168.2.6 -U SA -P 'yoursapassword'

Now you will get the sqlcmd prompt and can create a database:

use master
go
create database LinuxTestDatabase
go

Finally

Now you have a Linux container running SQL Server for Linux. There is no problem connecting over the network to this server and database from within Visual Studio 2017. But when I try to add some tables from within Visual Studio 2017 there is an incompatibility error: VS2017 incompatible SQL Server version (SQL Server for Linux).

That's too bad; now I still have to install SQL Server Management Studio in order to connect to the server and manually create some tables. Something I tried to prevent by using the sqlcmd tools.



Exception filter


Exception filters in C#

Exception filters are new in C# 6.0. With an exception filter it's now possible to place a filter on the catch statement. This filter is a condition for entering the catch block. If the condition is met the catch body is entered; otherwise the catch is skipped, keeping the full stack trace. Keeping the full stack trace can be important and helpful when debugging.

    public class MyProgram
    {
        public void TestFilterMethod()
        {
            try
            {
                DoSomeThingWrong(true);
            }
            catch (MyException ex) when (ex.Code == 42)
            {
                Console.WriteLine("Error 42 occurred");
            }
        }

        private void DoSomeThingWrong(bool wrong)
        {
            if(wrong)
            {
                throw new MyException() { Code = 42 };
            }
        }
    }

    public class MyException: Exception
    {
        public int Code { get; set; }
    }

So in this example only the exception with code 42 is handled; the rest are caught at a higher level.
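Because the filter is just a boolean expression, it can even call a method. A hypothetical sketch that uses this to observe exceptions without ever handling them (the helper method is made up):

    // hypothetical helper: log the exception and return false,
    // so the catch body never runs and the full stack trace stays intact
    private bool LogAndContinueSearch(MyException ex)
    {
        Console.WriteLine($"Observed error {ex.Code}: {ex.Message}");
        return false;
    }

    // usage: catch (MyException ex) when (LogAndContinueSearch(ex)) { }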



Expression body


Methods whose body consists of an expression

Sometimes a method consists of just one statement. In C# 6.0 the body of such a method can be replaced with an expression.

public string FullName => $"{FirstName} {LastName}";

As you can see, the expression uses the new interpolated string functionality.
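The same works for regular methods consisting of a single statement; a small hypothetical sketch:

public string Greet(string name) => $"Hello {name}";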



Cleaning up the unittest database: performance


Cleaning the unittest database
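The script below disables all foreign key constraints, deletes the data from every table, and re-enables the constraints afterwards; this is much faster than recreating the database for every test run.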

            using (var logic = new LogicBase())
            {
                var truncateScript = @"EXEC sp_msforeachtable ""ALTER TABLE ? NOCHECK CONSTRAINT all"" " +
                                     @"EXEC sp_MSForEachTable ""DELETE FROM ? "" " +
                                     @"EXEC sp_msforeachtable ""ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"" ";

                 logic.db.Database.ExecuteSqlCommand(truncateScript);
            }



UnitTest





Working with GIT and .NET projects


How to work with GIT and .NET multi-project solutions

GIT version control and .NET solutions

Independent of company size, some functionality is often stored in a library which is shared between multiple solutions. There may be, for instance, multiple website solutions which talk to the same database. In this case there is a data library (a C# project) with functionality to store, query and update the database, which is shared between the different solutions. This is a logical structure found everywhere in all sorts of C# development environments. Looking at some version control systems like Subversion or TFS version control, this is easily supported. GIT, on the other hand, is a completely different story. GIT is folder based. With this in mind there are two options to store these kinds of solutions in version control:

  • Store your complete folder structure with all solutions in a single repository.
  • Store every .NET solution in a single repository and work with GIT submodule's.

Storing your complete development environment as a single repository in GIT is the simplest solution. On the other hand, creating a repository for every .NET solution feels better. In that case you will have to work with GIT submodules. GIT submodules are supported by Visual Studio, but you are still forced to use the GIT command line:

Create a GIT submodule

- From the parent project, open the Git command prompt and execute the following commands:

  • git submodule add "<the TFS URL of the submodule here>"
  • git submodule init
  • git submodule update

Bind the submodule to a branch

Be aware: the submodule is not yet bound to a remote branch. We first have to bind the submodule to a remote branch by opening the Git command prompt from the parent project and executing the checkout command: git checkout master. After this, check the binding of the submodule with the command git status.

How to commit a GIT submodule

Standard:

  • Add all changes by executing: git add .
  • Commit all changes locally by executing: git commit -a -m "commit message here"
  • Push all changes to the server by executing: git push

After that you can push all your changes with one command:

  • git push --recurse-submodules=on-demand

GIT submodules and branches

Be aware that when you are working in a specific branch in the parent project, you will have to manually create and change the branch of the submodule:

git push --set-upstream origin 'new-branch-name'

Conclusion of GIT submodules

Not sure yet..... :-)



Xml serialize util





Log





Ado database example





Datamapper





Date


JavaScript date



Debugging a Windows service


Trick for debugging a Windows service

Debugging a Windows service can be slow and cumbersome: you have to install the service, start it through the Services control manager, attach to the process, etc.

One nice trick to debug a Windows service is to adjust the program's Main method with the following code. It detects whether you start the service manually and presents a console right away:

        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        static void Main(string[] args)
        {
            Environment.CurrentDirectory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
            var servicesToRun = new ServiceBase[] { new AdControllerService() };

            if (Environment.UserInteractive)
            {
                // Start for debugging
                var type = typeof(ServiceBase);
                const BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;
                var method = type.GetMethod("OnStart", flags);
                foreach (var service in servicesToRun)
                {
                    method.Invoke(service, new object[] { args });
                }

                Console.WriteLine("Service is started. Press any key to exit");
                Console.ReadKey();

                var stopMethod = type.GetMethod("OnStop", flags);
                foreach (var service in servicesToRun)
                {
                    stopMethod.Invoke(service, new object[] { });
                }
            }
            else
            {
                // Startup as service. 
                ServiceBase.Run(servicesToRun);
            }        
        }

You will have to change the output type of your project from Class Library to Console Application.



MessageQueue


Microsoft message queue for distributed applications
