image

In this article, we'll look at the main features of SonarQube, a platform for continuous analysis and measurement of code quality, and we'll also discuss the advantages of code quality evaluation methods based on SonarQube metrics.

SonarQube is an open source platform designed for continuous analysis and measurement of code quality. SonarQube provides the following capabilities:

- Support for Java, C, C++, C#, Objective-C, Swift, PHP, JavaScript, Python, and other languages.
- Reports on code duplication, compliance with coding standards, unit test coverage, possible errors in the code, comment density, technical debt, and much more.
- Saves the history of metrics and builds charts of how the metrics change over time.
- Fully automated analysis: integrates with Maven, Ant, Gradle, and common continuous integration systems.
- Integration with IDEs such as Visual Studio, IntelliJ IDEA, and Eclipse using the SonarLint plugin.
- Integration with external tools: JIRA, Mantis, LDAP, Fortify, and so on.
- Extensibility of the existing functionality using third-party plugins.
- Implementation of the SQALE methodology to evaluate technical debt.

The SonarQube quality model implements the SQALE methodology (Software Quality Assessment based on Lifecycle Expectations) with certain improvements. As is well known, the SQALE methodology focuses mainly on the complexity of code maintainability and does not take project risks into account.

For example, if a critical security problem is detected in a project, strictly following the SQALE methodology requires you to first address all the existing reliability, changeability, and testability issues, and only then return to the new critical problem. In fact, it's much more important to focus on fixing new bugs if the potential problems have been living in the code for quite a long time without any user bug reports.

Taking that into account, the SonarQube developers have modified the SQALE-based quality model to focus on the following important points:

- The quality model should be as simple as possible
- Bugs and vulnerabilities should not get lost among the maintainability issues
- Serious bugs and security vulnerabilities in the project should cause the Quality Gate check to fail
- Maintainability issues of the code are important too and cannot be ignored
- The estimation of the remediation cost (using the SQALE analysis model) is important and should be carried out

The standard SonarQube Quality Gate uses the following metric values to assess if the code has passed the checks successfully:

- 0 new bugs
- 0 new vulnerabilities
- technical debt ratio on the new code <= 5%
- the new code coverage is not less than 80%
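As a rough illustration, these conditions amount to a simple conjunction. Here is a minimal plain-Java sketch (the class and method names are hypothetical; SonarQube itself evaluates the Quality Gate on the server):

```java
// Hypothetical sketch of the standard Quality Gate conditions;
// SonarQube evaluates these server-side, this is only an illustration.
public class QualityGate {
    public static boolean passes(int newBugs, int newVulnerabilities,
                                 double newDebtRatioPercent, double newCoveragePercent) {
        return newBugs == 0
                && newVulnerabilities == 0
                && newDebtRatioPercent <= 5.0
                && newCoveragePercent >= 80.0;
    }

    public static void main(String[] args) {
        System.out.println(passes(0, 0, 3.2, 85.0)); // true
        System.out.println(passes(1, 0, 3.2, 85.0)); // false: a new bug was introduced
    }
}
```

A project failing any one of the four conditions fails the gate as a whole.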

The Sonar team has defined the seven deadly sins of developers that increase technical debt:

- Bugs and potential bugs
- Violation of coding standards
- Code duplication
- Insufficient unit tests coverage
- Poor distribution of complexity
- Spaghetti design
- Too few or too many comments

The SonarQube platform is designed to help fight these sins.

Let's have a look at the main features of SonarQube in more detail.
Kate Milovidova 16 november 2016, 12:13

image

Microsoft officials have revealed that the open source runtime will get new APIs, ARM processor support, and language upgrades. .NET Core is more than what ASP.NET developers in India are expecting. Officials are planning to release more APIs, an F# language upgrade, Linux support, and expanded processor support in .NET Core.

.NET Core is a multi-platform framework and a modular subset of the .NET Framework programming model. Developers will benefit from these latest updates. This release will cover several APIs missing from .NET Core, including serialization, networking, data, etc.

These newly released APIs will be part of .NET Standard 2.0, which will be launched simultaneously, resulting in API consistency. The latest APIs will help developers write portable code that can run on the critical .NET platforms.

F# is a functional-first language designed by Microsoft. It is to be upgraded as part of the .NET Core plans. It is expected that the F# upgrade will be released later this year or in the first quarter of 2017. F# 4.1 will include complete .NET Core support and workspace support, with a better IDE experience in the F# language service.

C# will also get code quality and performance enhancements, such as binary literals and throw expressions, along with developer productivity enhancements that include local functions. Developers will be able to use these features in C# 7.

Officials have also planned to support ARM 32/64 processors in the .NET Core runtime and libraries in the coming year, both on Linux and Windows, at different times. Moreover, .NET Core will also support more Linux distributions.

Developers will see a minor update to .NET Core at the end of 2016 or the beginning of 2017. ASP.NET Core will get WebSockets capabilities along with several enhancements for running on the Azure cloud service, including providers for Key Vault secure key management, startup time improvements, and providers for service logging.

Experts will preview the SignalR library for bidirectional communications. The first item on the road map is a 1.0.1 patch release, expected at the beginning of August. This release will boost performance in dotnet build to improve ASP.NET Core publishing times.

For more updates, ASP.NET developers in India can help you. Ask them about the latest releases and features of .NET Core and get a quick response. You can leave comments here and connect to them directly. They will answer your questions in the comments.
JohnnyMorgan 16 november 2016, 9:31

image

One of the main problems with C++ is the huge number of constructions whose behavior is undefined, or is just unexpected for a programmer. We often come across them when using our static analyzer on various projects. But, as we all know, the best thing is to detect errors at the compilation stage. Let's see which techniques in modern C++ help write code that is not only simple and clear, but also safer and more reliable.
What is Modern C++?

The term Modern C++ became very popular after the release of C++11. What does it mean? First of all, Modern C++ is a set of patterns and idioms that are designed to eliminate the downsides of good old "C with classes", that so many C++ programmers are used to, especially if they started programming in C. C++11 looks way more concise and understandable, which is very important.
Kate Milovidova 15 september 2016, 11:44

image
Sofia Fateeva 12 september 2016, 12:51

querydsl

Technology: Querydsl is a Java-based framework that enables the construction of statically typed SQL-like queries. Instead of writing queries as inline strings or externalizing them into XML files, they can be constructed via a fluent API such as Querydsl. We can use Querydsl in a Java application for creating all kinds of SQL statements. Querydsl has various plugins for JPA, MongoDB, SQL, Lucene, and also for Java collections.

The benefits of using a fluent API in comparison to simple strings are:

- code completion in the IDE
- almost no syntactically invalid queries allowed
- domain types and properties can be referenced safely
- adapts better to refactoring changes in domain types
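To make these benefits concrete, here is a tiny plain-Java sketch of the idea behind a fluent, typed query API (the names IntPath and select are hypothetical illustrations, not the actual Querydsl API):

```java
public class FluentSketch {
    // A typed "path" stands in for a column; a typo in the property name
    // or a wrong operand type fails at compile time instead of at runtime.
    static final class IntPath {
        final String name;
        IntPath(String name) { this.name = name; }
        String gt(int value) { return name + " > " + value; }
    }

    static String select(String table, String predicate) {
        return "select * from " + table + " where " + predicate;
    }

    public static void main(String[] args) {
        IntPath age = new IntPath("age");
        // age.gt("eighteen") would not compile — that is the point.
        System.out.println(select("person", age.gt(18)));
        // prints: select * from person where age > 18
    }
}
```

With a raw query string, the same typo or type mismatch would only surface when the query is executed.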


Principles of Querydsl:

Type safety is the core principle of Querydsl. Queries are constructed based on generated query types that reflect the properties of domain types. Also function/method invocations are constructed in a fully type-safe manner.

Consistency is another important principle. The query paths and operations are the same in all implementations and also the Query interfaces have a common base interface.

Querying JPA:
Querydsl defines a general statically typed syntax for querying on top of persisted domain model data. Querydsl for JPA is an alternative to both JPQL and Criteria queries. It combines the dynamic nature of Criteria queries with the expressiveness of JPQL, and all that in a fully type-safe manner.

Preparation:

Add the property in pom.xml:
<querydsl.version>4.1.3</querydsl.version>

And add the below dependencies in pom.xml:

<dependency>
  <groupId>com.querydsl</groupId>
  <artifactId>querydsl-apt</artifactId>
  <version>${querydsl.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.querydsl</groupId>
  <artifactId>querydsl-jpa</artifactId>
  <version>${querydsl.version}</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.6.1</version>
</dependency>

And the Querydsl Maven plugin (APT plugin):

<plugin>
  <groupId>com.mysema.maven</groupId>
  <artifactId>apt-maven-plugin</artifactId>
  <version>1.1.3</version>
  <executions>
    <execution>
      <goals>
        <goal>process</goal>
      </goals>
      <configuration>
        <outputDirectory>generated-sources</outputDirectory>
        <processor>com.querydsl.apt.jpa.JPAAnnotationProcessor</processor>
      </configuration>
    </execution>
  </executions>
</plugin>


The JPAAnnotationProcessor finds domain types on the classpath that are annotated with the javax.persistence.Entity annotation and generates query types for them.

If we use Hibernate annotations instead of JPA annotations in domain types, we need to use the APT processor com.querydsl.apt.hibernate.HibernateAnnotationProcessor instead of com.querydsl.apt.jpa.JPAAnnotationProcessor.

Generating Query Types:
After adding the Maven plugin, if we run mvn clean compile, the query classes will be generated in the specified outputDirectory (generated-sources).

Adding generated sources to classpath:
We can run mvn eclipse:eclipse to update the Eclipse project to include outputDirectory as a source folder.

Queries with Querydsl:
Queries can be constructed based on the generated query types, and function/method invocations are likewise constructed in a type-safe manner.
All the Querydsl query types extend the EntityPathBase class.

Creating Entity and QueryDsl type:

Let's define one simple entity; the same entity is used in the following examples.

Person.java

package org.sample.entity;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column
    private String firstname;

    @Column
    private String surname;

    @Column
    private int age;

    public Person() {
    }

    public Person(final String firstname, final String surname) {
        this.firstname = firstname;
        this.surname = surname;
    }

    public Person(final String firstname, final String surname, final int age) {
        this(firstname, surname);
        this.age = age;
    }
    // setters and getters…
}


The Querydsl Maven plugin will generate the query type QPerson in the same package. It contains a static field which returns the Person query type:

public static final QPerson person = new QPerson("person");

Generated QPerson.java

package org.sample.entity;

import static com.querydsl.core.types.PathMetadataFactory.*;

import com.querydsl.core.types.dsl.*;

import com.querydsl.core.types.PathMetadata;
import javax.annotation.Generated;
import com.querydsl.core.types.Path;

/**
 * QPerson is a Querydsl query type for Person
 */
@Generated("com.querydsl.codegen.EntitySerializer")
public class QPerson extends EntityPathBase<Person> {

    private static final long serialVersionUID = 1183946598L;

    public static final QPerson person = new QPerson("person");

    public final NumberPath<Integer> age = createNumber("age", Integer.class);

    public final StringPath firstname = createString("firstname");

    public final NumberPath<Long> id = createNumber("id", Long.class);

    public final StringPath surname = createString("surname");

    public QPerson(String variable) {
        super(Person.class, forVariable(variable));
    }

    public QPerson(Path<? extends Person> path) {
        super(path.getType(), path.getMetadata());
    }

    public QPerson(PathMetadata metadata) {
        super(Person.class, metadata);
    }

}


Building Queries with JPAQuery:

We need to create an EntityManager using persistence.xml for accessing the database. We can create a dataSource, and using that dataSource we can create an EntityManagerFactoryBean.

persistence.xml:
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
             http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd" version="2.0">

  <persistence-unit name="default" transaction-type="RESOURCE_LOCAL">
    <properties>
      <property name="hibernate.hbm2ddl.auto" value="update" />
      <property name="hibernate.show_sql" value="true" />
      <property name="hibernate.transaction.flush_before_completion" value="true" />
      <property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider" />
    </properties>
  </persistence-unit>

</persistence>

And the Spring bean configuration:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" name="EntityManagerFactory">
  <property name="persistenceUnitName" value="default"></property>
  <property name="dataSource" ref="dataSource"></property>
  <property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
      <property name="showSql" value="true" />
      <property name="generateDdl" value="true" />
      <property name="databasePlatform" value="${db.dialect}" />
    </bean>
  </property>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="${db.driver}" />
  <property name="url" value="${db.url}" />
  <property name="username" value="${db.username}" />
  <property name="password" value="${db.password}" />
</bean>

We externalize the database-related properties in a properties file.
db.properties:
db.username=sa
db.password=
db.driver=org.hsqldb.jdbc.JDBCDriver
db.url=jdbc:hsqldb:mem:app-db
db.dialect=org.hibernate.dialect.HSQLDialect

We inject the properties file using a PropertyPlaceholderConfigurer bean.
<bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="locations">
    <list>
      <value>classpath:db.properties</value>
    </list>
  </property>
</bean>


We need to create a JPAQuery using the EntityManager:

JPAQuery<Person> query = new JPAQuery<>(em); // where em is an EntityManager instance

Then we create the query entity object using its static field:
QPerson person = QPerson.person;

Using this query entity we can construct the SQL query and fetch the matching entities.
E.g., if we want to query persons whose firstname is "Kent":
query.from(person).where(person.firstname.eq("Kent")).fetch();
It will return the collection of persons whose firstname is "Kent".
The from call defines the query source and projection, the where part defines the filter, and fetch tells Querydsl to return all matched elements.
public List<Person> findPersonsByFirstnameQueryDSL(final String firstname) {
    final JPAQuery<Person> query = new JPAQuery<>(em);
    final QPerson person = QPerson.person;

    return query.from(person).where(person.firstname.eq(firstname)).fetch();
}
Likewise, we can add multiple where conditions.
If we want to get all records with a given firstname and surname:
query.from(person).where(person.firstname.eq("firstname").and(person.surname.eq("surname"))).fetch();


Sorting queries:
The query entity provides an orderBy method, to which we pass the property to sort by and the sort order.
E.g., if we want the rows in descending order of the surname property:
query.from(person).where(person.firstname.eq(firstname)).orderBy(person.surname.desc()).fetch();
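As an analogy only (Querydsl translates orderBy into the SQL ORDER BY clause), the same descending ordering on an in-memory list can be sketched with a stdlib Comparator; the Person record here is a simplified stand-in for the entity:

```java
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // Simplified stand-in for the Person entity from the tutorial.
    record Person(String firstname, String surname) {}

    // In-memory analogue of orderBy(person.surname.desc()).
    static List<Person> bySurnameDesc(List<Person> people) {
        return people.stream()
                .sorted(Comparator.comparing(Person::surname).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Person> sorted = bySurnameDesc(List.of(
                new Person("Kent", "Beck"),
                new Person("Erich", "Gamma")));
        System.out.println(sorted.get(0).surname()); // Gamma
    }
}
```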

Aggregation using Querydsl:
The query entity provides the select method to specify aggregate functions; for numeric fields it has max and min methods.

E.g.: query.from(person).select(person.age.max()).fetchFirst();

Aggregation with GroupBy:
We can group elements by a property using the transform method.

E.g., if we want to group by firstname, taking the maximum age per group:
query.from(person).transform(GroupBy.groupBy(person.firstname).as(GroupBy.max(person.age)));
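The result of this transform call is a map from firstname to the maximum age. As an in-memory analogy (plain Java streams with hypothetical data, not the actual Querydsl execution), the grouping corresponds to:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByDemo {
    // Simplified stand-in for the Person entity from the tutorial.
    record Person(String firstname, int age) {}

    // In-memory analogue of groupBy(person.firstname).as(GroupBy.max(person.age)).
    static Map<String, Integer> maxAgeByFirstname(List<Person> people) {
        return people.stream()
                .collect(Collectors.toMap(Person::firstname, Person::age, Math::max));
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
                new Person("Kent", 55),
                new Person("Kent", 40),
                new Person("Erich", 50));
        System.out.println(maxAgeByFirstname(people).get("Kent")); // 55
    }
}
```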

Testing With Querydsl:

We will create new Person objects, store them, and then search by firstname, or by both firstname and surname.

Main.java:

package org.sample;

import org.sample.dao.PersonDao;
import org.sample.entity.Person;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("Spring-Context.xml");
        PersonDao personDao = context.getBean(PersonDao.class);
        personDao.save(new Person("Erich", "Gamma"));
        Person person = new Person("Kent", "Beck");
        personDao.save(person);
        personDao.save(new Person("Ralph", "Johnson"));
        Person personFromDb = personDao.findPersonsByFirstnameQueryDSL("Kent").get(0);
        System.out.println(personFromDb.getFirstname());
        System.out.println(personFromDb.getSurname());
        context.close();
    }
}

Conclusion:
In this tutorial we explained what Querydsl is, how the query entities are generated using the Maven plugin, and how we can create queries using Querydsl.

For questions related to Querydsl, you can make comments below at any time. Java development experts based in India will answer your questions about Querydsl via the comments.

You can download the full implementation of the tutorial from https://github.com/sravan4rmhyd/querydsl.git
JohnnyMorgan 8 september 2016, 9:08

ASP.NET MVC developers working with MNCs have in-depth experience in developing MVC applications. In this post, they explain how to create a sample app with ASP.NET 5 that stores its data in Azure SQL. They are using Entity Framework and ASP.NET MVC scaffolding in the sample. For more information, read on.

In this post we will create a sample application using ASP.NET 5. The application will store its data in Azure SQL, using Entity Framework and ASP.NET MVC scaffolding to support the basic CRUD operations.

First of all, you will need access to Microsoft Azure. For that purpose, you can create a free account. You can get more information about how to do that at this link: https://azure.microsoft.com/en-us/free/
Creating a DB in MS Azure

After creating an account (or if you already have one), we have to create a new SQL instance which will provide space to host our information.

For that, select (+) in the menu on the left side, and then, among the options that appear in the first column, choose the Data and Storage option.

After that, select an option in the second column that appears, where we are offered several data providers to choose from.

Let's select SQL Database, given that we will use that provider in our article.

image

Then it is necessary to configure the selected database. In our case, let's give it the name "sqlAzureDemo".

After creating the database, we need to obtain a connection string for it, which will be used in our Core 1.0 application. To achieve that, select the recently created database as shown on the image; there is an option "Show Connection String", which provides what we need. Let's save that connection string for future reference.
Creating a Web App

To start creating our application, let's select the ASP.NET Web Application option. (I'm using the Visual Studio built-in templates for that purpose.)

image

First, let's give a name to our application (choose whatever name is most convenient for you) and then click OK. Now we are redirected to a new screen where we can choose a template to be used for our application. Since we are developing an application in ASP.NET 5, we have several options to choose from. In our case we must choose the last option on the screen (Web Application).

image

After selecting a template, the application will be generated and configured automatically by Visual Studio.
Creating a Model for Data

After Visual Studio finishes creating the application, the next step is to define the data model to be used. For that, let's create a new class in our web project.

Let's suppose that we are working with vehicles, so we will define some simple properties to represent them:

public class Car
{
    [Key]
    public int Identification { get; set; }
    public string Model { get; set; }
    public int Weight { get; set; }
}
Adding Scaffolding
Scaffolding will save us a lot of time by generating code automatically. We will use scaffolding for our initial CRUD operations. Based on a single class, and without writing a single line of code, we will be able to generate a controller for our CRUD operations and the associated views.

In the solution, select the Controllers directory and (right-clicking with your mouse) choose Add New Scaffold Item. A new screen will appear where you should choose "MVC6 Controller With Views Using Entity Framework".

image

Now you should have your controllers and associated views created automatically.
Adding a Connection String
In order to change the default connection string, copy the connection string you saved previously from Azure and put it in your appsettings.json file.
Changing Menu in The Layout
To be able to test our entity, we will add a new item to the menu, which is located in the layout file. The behavior we want is that clicking on that menu item shows the data from the entity we created.
Run an Application
Now you may press the F5 key (or select Debug -> Start Debugging) and the application will be launched in the browser. The newly created entities will appear in the top menu, as you can see on the following image:

image

If we select the new Vehicles item, we will be able to access the screens that were generated automatically by scaffolding.

If anything is unclear, ask ASP.Net MVC developers straight away. They will respond to your comments on the page itself.
ethanmillar 31 august 2016, 11:11

image

The development team working on PVS-Studio has finally started developing its product for Linux. That is the news that CTO Andrey Karpov wrote about in the article. Long disputes and requests from readers on habrahabr.ru, and discussions on Reddit, Linux.org, and other places can now gain a new round of comments. As mentioned in the article, you can volunteer to help the developers test this product and improve it.

There are many tasks on PVS-Studio's way to Linux that the technical director talks about. Put briefly, these are:

- more complete support of GCC and Clang;
- a new system of regression tests on Linux, so that the results of changes in the analyzer kernel can be tracked and new diagnostics added;
- compiler monitoring to help programmers quickly and easily check the project without distracting people who support makefiles and the build system in general;
- documentation improvement, so that the user can get information with the examples about any diagnostic;
- testing, distribution, support organization.

In this article you will find more details about the abilities of PVS-Studio for Windows and the tasks it can already solve on Linux.
Kate Milovidova 29 july 2016, 11:49

People often ask which programming language is easier, which is the most popular, which one to start learning, and so on. In this article we will compare two languages, Python and Ruby; their reference implementations CPython and MRI, to be exact.

We took the latest versions of the source code from the repositories (Ruby, Python) for the analysis. There weren't many glaring errors in these projects. Most of them are related to the usage of macros, although this code is quite innocent from the point of view of the developer. At the same time, suspicious fragments that occurred because of copy-paste, comparison of the SOCKET type with null, undefined behavior, storing values to variables that are already used, or null pointer dereferencing are really worth reviewing.

Having analyzed all the warnings of general analysis diagnostics and removed all the false positives, we have come to the following conclusion concerning the error density:

image

More details about the code fragments where these suspicious code fragments were found:
http://bit.ly/2a2lLZR

It's worth saying that, despite these flaws, the code is still of high quality. We should also take into account such factors as the size of the codebase, or the fact that some fragments are erroneous only from the point of view of the C++ language and don't affect the program in any way. That's why this analysis may be rather subjective: previously we haven't evaluated the error density of these projects. We'll try to do that in the future, so that we can compare the results of the checks later.
Kate Milovidova 22 july 2016, 12:36

The PVS-Studio team have written an interesting article about the ways you might shoot yourself in the foot when working with serialization, with code examples showing where the main pitfalls are, and also about the way a static code analyzer can help you avoid getting into trouble.

This article will be especially useful to those who are only starting to familiarize themselves with the serialization mechanism. More experienced programmers may also learn something interesting, or just be reassured that even professionals make mistakes.

However, it is assumed that the reader is already somewhat familiar with the serialization mechanism.

We should understand that the statements described in the article are relevant for some serializers, for example BinaryFormatter and SoapFormatter; for others, such as manually written serializers, the behavior can be different. For example, the absence of the [Serializable] attribute on a class may not prevent a custom serializer from serializing and deserializing it.

Briefly summarizing all the information, we can formulate several tips and rules:

- Annotate types implementing the ISerializable interface with the [Serializable] attribute;
- Make sure that all members of types annotated with the [Serializable] attribute get correctly serialized;
- When implementing the ISerializable interface, don't forget to implement the serialization constructor (Ctor(SerializationInfo, StreamingContext));
- In sealed types, make the serialization constructor private; in unsealed types, make it protected;
- In unsealed types implementing the ISerializable interface, make the GetObjectData method virtual;
- Check that in GetObjectData all the necessary members get serialized, including members of the base class if there are any.

We hope you have learned something new from this article and will become an expert in the sphere of serialization. Sticking to the rules and following the tips we have given above will save you time debugging the program and make life easier for you and for other developers working with your classes. The PVS-Studio analyzer will also be of great help, allowing you to detect such errors right after they appear in your code.

You can read the full article at the link: http://www.viva64.com/en/b/0409/
Kate Milovidova 5 july 2016, 7:57

Nowadays a lot of projects are opening their source code and letting those who are interested in its development edit the code. OpenJDK is no exception; PVS-Studio programmers have found a lot of interesting errors in it that are worth paying attention to.

OpenJDK (Open Java Development Kit) is a project for the creation and implementation of the Java (Java SE) platform, which is now free and open source. The project was started in 2006 by Sun. It uses multiple languages: C, C++, and Java. We are interested in the source code written in C and C++. Let's take the 9th version of OpenJDK. The code of this implementation of the Java platform is available in the Mercurial repository.

During verification, the analyzer found various errors in the project, including copy-paste, bugs in operator precedence, errors in logical expressions and in pointer handling, and other bugs, which are described in detail in the article.

It's always amusing to check a project which is used and maintained by a large number of people. The better and more accurate the code is, the more safely and effectively the program will work. The bugs we found are another proof of the usefulness of an analyzer, as it allows the detection of errors which would otherwise be hard to catch in a simple code review.
Kate Milovidova 17 june 2016, 9:00