Containerised ASP.NET Core WebAPI With Docker On Mac.

The new .NET Core is the biggest change since the introduction of the .NET platform. It is fully open source, modular, and supported on Windows, Linux and macOS. In this post I am going to give it a test ride by creating a containerised C# application with the latest .NET Core.

Docker containers allow teams to build, test, replicate and run software, regardless of where the software is deployed. Docker containers assure teams that software will always behave the same no matter where it runs; there is no inconsistency in behaviour, which allows for more accurate and reliable testing.

The main advantage of using Docker containers is that they mimic running an application in a virtual machine, minus the hassle and resource neediness of a virtual machine. Available for Windows and Linux systems, Docker containers are lightweight and simple, taking up only a small amount of operating system capacity. They start up in seconds and require little disk space or memory.

Docker is available for Mac and Windows and can be downloaded from the official channel.

Prerequisites:

  1. Install Visual Studio for Mac
  2. Install Docker for Mac

Here we will build an ASP.NET Core WebAPI and host/run it in a container with the help of Docker.

1. Create New Project: choose the ASP.NET Core WebAPI template in Visual Studio 2017.

D1.png

Now provide the project name, solution name and other details like where to save the project.

D2.png

Our newly created project structure now looks like the below.

D3.png

Here we have a very simple case where the service only returns some information about employees, as our main objective is to host this tiny application in a container.

2. Add Docker Support to the Application

Now add a Dockerfile to the project and write the instructions for how the Docker image is built from the ASP.NET Core base image.

 

Below are the instructions issued to the daemon to create the Docker image.
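As a rough guide, a minimal Dockerfile for a published ASP.NET Core WebAPI could look like the sketch below; the base image tag and the assembly name are assumptions and should be adjusted to your project.

# Minimal sketch only; base image tag and assembly name are assumptions
FROM microsoft/aspnetcore:2.0
WORKDIR /app
# Copy the output of "dotnet publish -c Release -o out" into the image
COPY out .
# ASP.NET Core listens on port 80 inside the container
EXPOSE 80
ENTRYPOINT ["dotnet", "FirstApiWithDocker.dll"]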

 

3. Open Terminal on Mac:

Search for “Terminal” on the Mac machine and open a new window.

D6.png

Once we click on the new window option, a command window appears and we are all set to issue Docker commands to create the Docker image.

D7.png

4. Navigate to the application folder by issuing the change directory command “cd” and make sure we are inside the application folder. We can verify that all the project items are listed by issuing the “ls” command, which confirms we are in the right place.

D8.png

5. Create Docker Image:

The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context.

Command: docker build -t <image-name> .

Here our image name is “firstapiwithdocker”, so the command should be:

docker build -t firstapiwithdocker .

At this point we can see the daemon accept the command and start creating the Docker image from the Dockerfile instructions.

D9.png

Finally we can see the image has been successfully created and tagged with the “latest” keyword. If we don’t provide a tag, the daemon tags the image with “latest” by default.

D10.png

6. List All Docker Images:

Now we have to verify whether the required image has been created, so the below command is issued to list all the images. We can see all the important information about the images, like repository name, tag, image ID, created date and size. In the below image we can see our newly created image listed alongside the base images.

Command: docker images

D11.png

7. Run the Image in a Container:

So far we have successfully created a Docker image for our WebAPI solution containing all the necessary files, and now we have to create a container to run this image.

The below command will create the container.

Syntax: docker run -d -p <host-port>:<container-port> --name <container-name> <image-name>

Example: docker run -d -p 9000:80 --name FirstContainer firstapiwithdocker

Once we execute the above command, a new container is created and we get back a random container ID, which means the container was created successfully.

D12.png

Now list all the containers and we can see all the important metadata about them, like container ID, image name, command, created date, container status, ports and container name.

Here our newly created container is running, mapping port 9000 on the host to port 80 in the container.

D13.png

Let’s hit the URL “http://localhost:9000/api/values” in the browser or Postman to verify that our application is running in the container.

Below is the result of the WebAPI, which is running in the container instead of on the local machine.

D14

8. Push the Docker Image to Docker Hub:

Docker Cloud uses Docker Hub as its native registry for storing both public and private repositories. Once you push your images to Docker Hub, they are available in Docker Cloud.

We need to create a Docker Hub account to push the image to public/private repositories.

D15

 

The Docker image should be tagged with a fully qualified name before issuing the push command, so the below command will tag the image as “rakeshmahur/webapicore-sample”.

Syntax: docker tag <source-image> <target-image>

Example:  docker tag firstapiwithdocker rakeshmahur/webapicore-sample

Now log in to Docker Hub from the terminal window by issuing the “docker login” command and provide the Docker Hub account details (username/password).

D16.png

Once the Docker Hub credentials have been validated successfully, a success message appears in the window and we are now able to push the image to Docker Hub.

D17

Issue the docker push command to push the Docker image to Docker Hub.

docker push rakeshmahur/webapicore-sample

Once we execute the above command, we can see our local Docker image pushed to the Docker Hub repository and listed there, and anyone can pull this image and start working with it.

D19.png

Docker Commands

Below are some important and commonly used commands; refer to the Docker documentation for more details and a more exhaustive list of flags.

  • docker build -t <image-name> .
    • Builds an image from a given dockerfile. While still useful when handling individual images, ultimately docker-compose will build your project’s images.
  • docker exec -it <container> <command>
    • Runs a command in a running container. More than anything else, I’ve used exec to run a bash session (docker exec -it <container> /bin/bash).
  • docker image ls
    • Lists images on your machine.
  • docker image prune
    • Removes unused images from your machine. Especially when building new images, I’ve found myself constantly wanting a clean slate. Combining prune with other commands helps clear up the clutter.
  • docker inspect <container>
    • Outputs JSON formatted details about a given container. More than anything else I look for the IP address via (docker inspect <container> | grep IPAddress).
  • docker pull <image>
    • Downloads a given image from a remote repository. For development purposes, docker-compose will abstract this away, but if you want to run an external tool or run the project on a new machine you’ll use pull.
  • docker ps
    • Without any flags, this lists all running containers on your machine. I’m constantly tossing on the ‘-a’ flag to see what containers I have across the board. While you are building a new image, containers spawned from it will inevitably exit prematurely due to some runtime error, and you’ll need ‘docker ps -a’ to look them up.
  • docker push <image>
    • Once you have an image ready to be distributed/deployed you’ll use push to release it to either docker hub or a private repository.
  • docker rm <container>
    • Removes a stopped container from your system. You need to run docker stop first if it is running.
  • docker rmi <image>
    • Removes an image. You may need to add the ‘--force’ flag to force removal if it is in use (provided you know what you are doing).
  • docker run <image>
    • Runs a command in a new container. Learning the various flags for the run command will be extremely useful. The flags I’ve been using heavily are as follows:
      • --rm – Removes the container after you end the process
      • -it – Run the container interactively
      • --entrypoint – Override the default command the image specifies
      • -v – Maps a host volume into the container. For development, this allows us to use the image’s full environment and tools, but provide it our source code instead of production build files.
      • -p – Maps a custom port (e.g. 8080:80)
      • --name – Gives the container a human-readable name which eases troubleshooting
      • --no-cache – Forces Docker to re-evaluate each step instead of using cached layers (note that this is a docker build flag)
  • docker version
    • Outputs both the client and server versions of Docker being run. This isn’t the same as ‘-v’.
  • docker volume ls
    • While there are variants on volumes, so far I mostly use the ‘ls’ command to list current volumes for troubleshooting. I’m sure there will be more to come with using volumes.

 

 


Securing Sensitive App Settings Using Azure Key Vault.

Download Complete Project: WebApiWithAzureKeyVault

Why Azure Key Vault?

Almost every Azure app has some kind of cryptographic key, storage account key, sensitive setting, password, or connection string.

For example, consider a web app that requires a connection string to an Azure SQL Database. Storing this sensitive information in an App.config file could result in it being checked in to a source-code control system and unintentionally exposed to many developers.

Compare this to using Azure Key Vault, where the App.config file only contains a reference to this sensitive data, and is controlled by the access policy of Azure Key Vault.

Below is the insecure way which is commonly used in Azure-based solutions:

A1

You can see here that all the secret information is stored in the web.config file in plain text; if someone got access to the server they could easily steal all this sensitive information. Usually these configuration files are also checked in to repository systems like TFS, GitHub etc. along with the other project files, so anyone with access to those repositories can also see the secrets.

By using Key Vault you can securely store data and avoid having these sensitive pieces of information stored in source code which may then be compromised.

The Microsoft Azure cloud platform provides a secure secrets management service, Azure Key Vault, to store sensitive information. It is a multi-tenant service for developers to store and use sensitive data for their application in Azure.

The Azure Key Vault service can store three types of items: secrets, keys, and certificates.

  • Secrets are any sequence of bytes under 10KB like connection strings, account keys, or the passwords for PFX (private key files). An authorized application can retrieve a secret for use in its operation.
  • Keys involve cryptographic material imported into Key Vault, or generated when a service requests the Key Vault to do so. An authorized cloud service can request the Key Vault perform one or more cryptographic operations with a key on its behalf.
  • An Azure Key Vault certificate is simply a managed X.509 certificate. What’s different is Azure Key Vault offers life-cycle management capabilities. Like Azure Keys, a service can request Azure Key Vault to create a certificate. When Azure Key Vault creates the certificate, it creates a related private key and password. The password is stored as an Azure Secret while the private key is stored as an Azure Key. Expired certificates can roll over with notifications before these operations happen.

Application flow with key vault

A30

Steps Required:

  1. Create A Key Vault
  2. Create a Secret
  3. Register an App in Azure Active Directory
  4. Create an API Key for the App
  5. Give App-Specific Permissions to Access Key Vault
  6. Configure your Dot Net Application

1. Create a key vault

Log in to the Azure portal and add the new service “Key vault”. If ‘Key vaults’ is not already in your list, click on ‘More services’ and use the filter to find it. Select ‘Key vaults’, fill in all the mandatory information and press the create button.

A2.JPG

A3

2. Create Secrets:

To do this, click on ‘Secrets’ under ‘Settings’ on the left, or under ‘Assets’ in the Overview panel. Once the ‘Secrets’ panel opens up, click the ‘Add’ button at the top so you can create a new one.

Activation and expiry dates can be used if you only want the secret to be accessed for a specific period of time. When you are finished, click ‘Create’.

A4

The key vault DNS name will be used as the Key Vault URL in the application from where the key requests are initiated.

A15.jpg

3. Register an App in Azure Active Directory

Now you have data protected by Key Vault, and first you need to give the application (secure) access to this data.

Again go to the Azure portal and search for “Azure Active Directory”. Inside AD, select ‘App registrations’ from the ‘Manage’ panel on the left. This is where you will configure the access and permissions our application will have when accessing Key Vault programmatically.

A5

In my case I have already created a WebAPI app named “DubaiProperties-Api” which is running under Azure App Service, and I have registered the same application in Azure Active Directory so that it can read secure keys/secrets from the key vault.

A7

4. Create an API Key for the Registered App

From the ‘App registrations’ menu, you should see your newly created app listed.

A8.JPG

Click on the registered application and copy the ‘Application ID’ that you should be able to see under ‘Essentials’.

A9.JPG

Select ‘Keys’ from the ‘API Access’ section on the right. Give the key a meaningful description that explains its purpose, then set an expiration setting. Click ‘Save’ and your API key ‘Value’ will be presented to you. Copy this key value now, as once you navigate away it will never be shown again.

You can always create a new one, if you forgot to copy it.

A10

A11

5. Give App-Specific Permissions to Access Key Vault

Return to key vault and select ‘Access Policies’ under the ‘Settings’ panel on the right. Click the ‘Add New’ button. Click the ‘Select Principal’ option to be presented with a new blade. Enter the Application ID of the app in Azure AD into this field, and select the app when it is presented to you. Click the ‘Select’ button at the bottom to confirm. You can now configure the permissions that you wish to grant the application.

A13.jpg

Only assign the necessary permissions. As it is only Secrets that your app needs access to (and read-only access at that), I would suggest picking ‘Get’ and ‘List’ under the ‘Secret permissions’ option. This is all you need to do, so click ‘OK’ to complete this step.

A14

Now the key vault configuration is ready to store secret keys, which can be referenced by any application.

6. Configure your Dot Net Application

Now that all the Key Vault and Active Directory administration tasks are complete, you need to set up the .NET application to consume the secret keys from Key Vault instead of defining those keys in app.config or somewhere else in the application.

Let’s start with the WebAPI project that needs to use some secrets stored in the key vault.

Initially there are a few things that need to be configured in your application, like the application ID, the Key Vault URL and the app registration key. All of this information was captured above at the time of the application registration in AD and the key vault creation.

Below are the settings in web.config:

A16.jpg

Now add the NuGet packages for Azure Key Vault to the application.

A17.jpg

Create a helper class that uses the Azure SDK to interact with Azure Key Vault, fetch the required secret keys and use them in the application.

A18.jpg
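The actual helper is shown in the screenshot above; as a rough sketch of the same idea, assuming the Microsoft.Azure.KeyVault and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet packages and appSettings keys named ClientId, ClientSecret and KeyVaultUrl (all illustrative names), it could look like this:

using System.Configuration;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// Sketch of a Key Vault helper; class and setting names are illustrative.
public static class KeyVaultHelper
{
    // Callback used by KeyVaultClient whenever it needs an Azure AD access token.
    private static async Task<string> GetAccessTokenAsync(string authority, string resource, string scope)
    {
        var clientId = ConfigurationManager.AppSettings["ClientId"];
        var clientSecret = ConfigurationManager.AppSettings["ClientSecret"];

        var authContext = new AuthenticationContext(authority);
        var credential = new ClientCredential(clientId, clientSecret);
        AuthenticationResult result = await authContext.AcquireTokenAsync(resource, credential);

        return result.AccessToken;
    }

    // Reads the value of a named secret from the vault configured in appSettings.
    public static async Task<string> GetSecretAsync(string secretName)
    {
        var vaultUrl = ConfigurationManager.AppSettings["KeyVaultUrl"];
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(GetAccessTokenAsync));

        var secret = await keyVaultClient.GetSecretAsync(vaultUrl, secretName);
        return secret.Value;
    }
}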

Now use this helper class in our WebAPI controller.

A19.jpg
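A hypothetical controller action that uses the helper (the controller and parameter names are placeholders, not the exact code in the screenshot):

using System.Threading.Tasks;
using System.Web.Http;

// Illustrative controller that returns a secret value fetched from Key Vault.
public class KeyVaultController : ApiController
{
    // GET api/KeyVault?secretName=DemoSecretKey
    public async Task<IHttpActionResult> Get(string secretName)
    {
        string secretValue = await KeyVaultHelper.GetSecretAsync(secretName);
        return Ok(new { Name = secretName, Value = secretValue });
    }
}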

Now publish the WebAPI project to Azure App Service. The WebAPI application should work after deployment.

A21.jpg

Now check the Swagger URL for the deployed API and see all the API controllers with all the HTTP verbs.

A22

Expand the GET method of the key vault controller and make a request to read the secret key value from Azure Key Vault.

A23.jpg

Below is the response of the WebAPI with the key vault values.

A24.jpg

If you analyse the whole code, you will not find any secret keys configured in the application configuration files or in the application settings section of the Azure App Service. The key values come directly from the key vault, which is a different location.

If you add a new version of the same key in the key vault, there is no need to make any changes in your application; the application always picks up the latest version of the key.

Let’s create a new version of the same key with a different value. Go back to the secrets section under the key vault settings pane; here you will find all the defined keys.

A25.jpg

Click on the key for which you want to create a new version with a new value; choose “DemoSecretKey” to update the value. Once you click on it, you will find all the versions of the selected key. Currently a single version exists.

You never see the values of the secret keys; they are hidden from everyone.

A26.jpg

Click on “New Version” and select “Manual” from the drop-down. Enter the new secret value for the key and save it.

A27.jpg

Now you can see the new version added with the updated value, and the previous version is also maintained by Azure Key Vault.

A28.jpg

Let’s test our WebAPI; it should read the new updated value from the key vault. The important thing is that I have not made any changes to the deployed WebAPI.

A29.jpg

That’s it! This configuration should enable you to protect your sensitive information in Key Vault and then provide a .NET application with secure access to that data.

Summary

The Azure Key Vault is an excellent service and a welcome addition to the overall Azure services family. It promotes the secure management of cryptographic keys without the associated overhead, which is an important step towards adopting and implementing better security within our applications. In this article we saw how to set up a Key Vault for our application and use the .NET SDK to retrieve its secrets.

Web API Documentation With Swagger

Download Complete Project: WebApiDocumentationWithSwagger

“If it is not documented, it doesn’t exist. As long as information is retained in someone’s head, it is vulnerable to loss.”

That is absolutely valid when we talk about APIs, because any small-to-complex API needs to be documented, in order to make it easy to use. This might be an interesting challenge, because you have to find the bridge between the abstract world of computer programming and the way people think and work. Here is where Swagger shows its great utility.

Swagger is a specification for documenting REST API. It specifies the format (URL, method, and representation) to describe REST web services. Swagger is meant to enable the service producer to update the service documentation in real-time so that client and documentation systems are moving at the same pace as the server.

Microsoft also provides its own API documentation library (ASP.NET Web API Help Pages) that automatically generates help page content for the web APIs on your site. The help page package is a good start, but it lacks things like discoverability and live interaction. This is where Swagger comes to the rescue.

Adding Swagger to your Web API does not replace ASP.NET Web API help pages. You can have both running side by side, if desired.

Adding Swagger to the API Project

To add Swagger to an ASP.NET Web API, we will install an open-source package called Swashbuckle via NuGet.

s1

After the package is installed, navigate to App_Start in the Solution Explorer. You’ll notice a new file called SwaggerConfig.cs. This file is where Swagger is enabled and any configuration options should be set here.

s2

Now you just need to set up Swagger by adding the below code:

s3
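As a reference sketch (the namespace and API title are placeholders), the Swashbuckle setup in SwaggerConfig.cs typically looks something like this:

using System.Web.Http;
using Swashbuckle.Application;
using WebActivatorEx;

[assembly: PreApplicationStartMethod(typeof(MyApi.SwaggerConfig), "Register")]

namespace MyApi
{
    // Enables the Swagger document and the Swagger UI for the Web API.
    public class SwaggerConfig
    {
        public static void Register()
        {
            GlobalConfiguration.Configuration
                .EnableSwagger(c => c.SingleApiVersion("v1", "My Web API"))
                .EnableSwaggerUi();
        }
    }
}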

Start a new debugging session (F5) and navigate to http://localhost:[PORT_NUM]/swagger. You should see the Swagger UI help pages for your APIs.

s4.PNG

Now you can see that all the API methods of the Web API come with pretty good documentation, and you can expand/hide each method definition.

Below is the API controller code in which I created two methods that appear in the Swagger documentation.

s5
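The real controller is in the screenshot; as an illustration only, a controller with two simple GET actions that would show up in the Swagger UI could be:

using System.Collections.Generic;
using System.Web.Http;

// Placeholder controller used only to illustrate what Swagger documents.
public class ProductsController : ApiController
{
    // GET api/products – returns the full list.
    public IEnumerable<string> Get()
    {
        return new[] { "Product1", "Product2" };
    }

    // GET api/products/5 – returns a single item by id.
    public string Get(int id)
    {
        return "Product" + id;
    }
}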

If you expand the method definition by clicking on an individual method, you will find all the required API-level metadata, like the request and response.

s6

s7.PNG

The good thing about Swagger is that you can invoke API methods from the Swagger UI without using any external REST client like DHC, Postman etc. There is a “Try it out!” button on each API method, so you can call the methods and get the response from the server.

s8.PNG

Another useful feature of Swagger is that it creates a JSON document with the entire documentation of the API endpoints and models.

In order to open the json document, where your documentation is included, access the link on the top of the dashboard.

Copy the below highlighted URL from the Swagger UI and enter it in a new browser tab. You will then get a pretty nice JSON document that contains all the metadata about all the API methods.

s9

The below image shows the JSON documentation.

s10.PNG

 

Now you have an API which is documented and offers a nice experience to developers. Keep in mind that this process of documenting APIs should start at the very beginning of the development process for it to be easy to maintain.


WebApi Exception: Multiple actions were found that match the request.

Usually a WebAPI controller contains Get, Get(id), Post, Put, Patch and Delete methods, but sometimes we need to create multiple GET or POST methods, or other custom methods, to support the HTTP verbs.

Let’s say we have an existing Get() method and now we want to add one more custom method named “GetAll()” to support the HTTP GET verb. My API controller code looks like:

c2
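As a sketch of the situation (the controller name and return values are placeholders), the conflicting actions look something like this:

using System.Collections.Generic;
using System.Web.Http;

// With the default route (no {action}), both of these actions match GET api/employee,
// which triggers the “multiple actions were found” exception.
public class EmployeeController : ApiController
{
    // Default GET
    public IEnumerable<string> Get()
    {
        return new[] { "Employee1", "Employee2" };
    }

    // Custom GET action
    [HttpGet]
    public IEnumerable<string> GetAll()
    {
        return new[] { "Employee1", "Employee2", "Employee3" };
    }
}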

When you define your new method with the HTTP GET verb alongside the existing Get() method and run the WebAPI, the below error comes up:

C1

Below is the WebApiConfig.cs for the above code, which is created by default when a new API project is created.

C3

So let’s talk about why this error comes up if everything is perfect in the code. Look at the route defined in the config file: in WebAPI routing only the controller name is mentioned in the route template, and no action (Get, Post or any custom action name) is defined.

Here is the difference between MVC routing and WebAPI routing: in MVC routing the action name is included in URLs by default, while in WebAPI action names are not mandatory.

MVC Route: url: “{controller}/{action}/{id}”

WebApi Route: routeTemplate: “api/{controller}/{id}”

So whenever any request comes to the WebAPI, it is always mapped by HTTP verb, and if only the default Get or Post methods are used then it returns a response to the client.

But when we have defined custom methods along with the default API methods, the same request will throw an exception, because now there are multiple action methods that support the HTTP verb and the server is not able to identify which method to execute.

This happens because we have not defined any specific action name in the WebAPI route.

So what is the solution to this problem, as we need many custom action names along with the default HTTP verbs in our WebAPI solution to solve day-to-day business needs? The question that comes to mind is whether custom method names are allowed in WebAPI or not.

The answer is “yes”; of course we can add as many custom action names as we want, but some changes have to be made in the WebAPI routing to support them.

To support custom action method names we have to add {action} alongside the controller name in the default route, as per below:

routeTemplate: “api/{controller}/{action}/{id}”

Now the complete WebApiConfig.cs after making the changes:

c4
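A sketch of the updated WebApiConfig.cs with {action} added to the route template:

using System.Web.Http;

// The route template now includes {action}, so custom action names such as GetAll() resolve correctly.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}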

Now let’s test our methods with these changes.

1. When the request goes to the default method:

C5

2. When the request goes to the custom action method (GetAll):

C6


WebApi Field-Level Response Without Implementing OData.

Download Complete Project: WebApiFieldLevelSelection

When you are writing a RESTful web API you often want to allow clients to feed the API a list of the fields they need, so that only useful data is returned to the client. Say, for example, you have an entity called Product that has many properties, and the client may need only a few of them. If you return the entire object every time the client asks for a product, it unnecessarily wastes bandwidth and increases the response time. To avoid that, you can accept a list of fields the client wants and return only those. How can you do that?

OData is the best way to achieve this, where you can use the $select query option to fetch only specific fields in the response.

The problem comes when the WebAPI does not implement OData; how can we achieve this functionality then?

To achieve this you can use some basic .NET types like dynamic, ExpandoObject or generic collections.

Let’s resolve the problem step by step:

  1. Create an empty WebAPI project with a controller named “ProductCategory” and two Get methods: one is parameterless and the other takes a string parameter that accepts a comma-separated field list in the request.
  2. The Get() method will return all fields in the response, while the Get(string fields) method accepts the list of fields and returns only the desired fields in the response.
  3. In the below example I have used a hard-coded list with dummy values. You may replace it with an actual database.

    ProductCategoryController.cs

w1.png
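An illustrative version of the controller (the Product type and hard-coded values are placeholders; the actual code is in the screenshot), delegating the field selection to a helper like the one sketched in the next section:

using System.Collections.Generic;
using System.Web.Http;

// Placeholder entity with a few properties.
public class Product
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public decimal Price { get; set; }
}

public class ProductCategoryController : ApiController
{
    // Hard-coded dummy data standing in for a real database.
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { ProductId = 1, ProductName = "Laptop", Price = 55000 },
        new Product { ProductId = 2, ProductName = "Mobile", Price = 15000 }
    };

    // GET api/ProductCategory – returns every field of every product.
    public IEnumerable<Product> Get()
    {
        return Products;
    }

    // GET api/ProductCategory?fields=productId,productName – returns only the requested fields.
    public IEnumerable<IDictionary<string, object>> Get(string fields)
    {
        return ApiHelper.SelectFields(Products, fields);
    }
}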

DynamicObject Method:

The DynamicObject method accepts the list of fields and returns an object. Here I have used .NET reflection to get the value of each requested field and add the field name and its value to a Dictionary<string, object>. Later this dictionary object is used in the LINQ query projection.

w2.PNG

ApiHelper.cs

w3.PNG
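A rough sketch of such a helper (the method and parameter names are illustrative): reflection resolves the requested properties once, and each entity is then copied into a Dictionary<string, object> that contains only those fields.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Sketch of a field-selection helper; not the exact code shown in the screenshots above.
public static class ApiHelper
{
    // Projects each entity onto a dictionary containing only the requested fields,
    // e.g. fields = "productId,productName".
    public static IEnumerable<IDictionary<string, object>> SelectFields<T>(IEnumerable<T> source, string fields)
    {
        var requested = fields
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(f => f.Trim());

        // Resolve the matching properties once, ignoring case.
        var properties = requested
            .Select(name => typeof(T).GetProperty(
                name, BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase))
            .Where(p => p != null)
            .ToList();

        return source.Select(item =>
        {
            IDictionary<string, object> shaped = new Dictionary<string, object>();
            foreach (var property in properties)
            {
                // Copy only the requested property values into the result object.
                shaped[property.Name] = property.GetValue(item);
            }
            return shaped;
        });
    }
}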

OUTPUT:

  1. When the user passes two fields (productId and productName) as a query string in the request, you can see that only those two fields come back in the JSON response.

w4

  2. When the user passes three fields (productId, productName and price) as a query string in the request, you can see that three fields come back in the JSON response.

w5.PNG

So you can see how to implement field-level selection in a WebAPI without an OData implementation.