NVarchar Vs Varchar

Each engineer defining a new string column decides: Do I use nvarchar or do I use varchar?

Since I discovered nvarchar, I've always used nvarchar. My thinking is: why use a data type that may not support a text value, when you likely won't discover the incompatible value until it's in production?

I hear the argument about space, but space is cheap and not worth worrying about. I know what you're thinking: the cost doesn't matter when the hard drive is full, and I agree.

Starting with SQL Server 2008 R2, Unicode compression is applied to nchar and nvarchar columns (nvarchar(max) is excluded). The effectiveness of the compression depends on the data, but with English text it's roughly 50%, which puts nvarchar on par with varchar's space requirements (1).

Something else to consider: most programming languages use UTF-16 as their native string type. So each time a varchar is loaded from the database, it's converted to UTF-16 (nvarchar-ish) anyway.
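
To make that risk concrete, here's a minimal C# sketch of what happens when text is forced through a single-byte encoding. I'm using ASCII to keep the sketch dependency-free; a real varchar column uses whatever 8-bit code page its collation specifies, but the failure mode is the same: characters outside the code page are silently degraded, and you may not notice until production.

using System;
using System.Text;

class VarcharRoundTrip
{
    static void Main()
    {
        string original = "naïve café 日本語";

        // Simulate storing the string in a single-byte column and reading it back.
        byte[] stored = Encoding.ASCII.GetBytes(original);
        string roundTripped = Encoding.ASCII.GetString(stored);

        Console.WriteLine(original);      // naïve café 日本語
        Console.WriteLine(roundTripped);  // na?ve caf? ??? (characters outside the encoding are lost)
    }
}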

This StackOverflow answer sums up nvarchar vs. varchar:

An nvarchar column can store any Unicode data. A varchar column is restricted to an 8-bit codepage. Some people think that varchar should be used because it takes up less space. I believe this is not the correct answer. Codepage incompatibilities are a pain, and Unicode is the cure for codepage problems. With cheap disk and memory nowadays, there is really no reason to waste time mucking around with code pages anymore.

All modern operating systems and development platforms use Unicode internally. By using nvarchar rather than varchar, you can avoid doing encoding conversions every time you read from or write to the database. Conversions take time, and are prone to errors. And recovery from conversion errors is a non-trivial problem.

If you are interfacing with an application that uses only ASCII, I would still recommend using Unicode in the database. The OS and database collation algorithms will work better with Unicode. Unicode avoids conversion problems when interfacing with other systems. And you will be preparing for the future. And you can always validate that your data is restricted to 7-bit ASCII for whatever legacy system you’re having to maintain, even while enjoying some of the benefits of full Unicode storage. (2)
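
The "validate that your data is restricted to 7-bit ASCII" point above is cheap to act on in application code. A minimal sketch (the helper name is my own):

using System.Linq;

static class LegacyGuards
{
    // Hypothetical guard: confirm a value is 7-bit ASCII before handing it to a legacy system.
    public static bool IsSevenBitAscii(string value) => value.All(c => c <= 127);
}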

My conclusion is that the only time the data is varchar is when it's at rest; everywhere else it's already Unicode.

References:

1. Unicode Compression implementation
2. What is the difference between varchar and nvarchar?

Changing a React Input Value from Vanilla JavaScript

The short answer:

function setNativeValue(element, value) {
    let lastValue = element.value;
    element.value = value;
    let event = new Event("input", { target: element, bubbles: true });
    // React 15: flag the event as simulated
    event.simulated = true;
    // React 16: update the internal value tracker so React sees the value as changed
    let tracker = element._valueTracker;
    if (tracker) {
        tracker.setValue(lastValue);
    }
    element.dispatchEvent(event);
}

var input = document.getElementById("ID OF ELEMENT");
setNativeValue(input, "VALUE YOU WANT TO SET");

Reference: https://stackoverflow.com/a/52486921/17360

The long answer:

React overrides the native JavaScript input behavior and tracks the value internally. Simply setting the field's value and dispatching an onChange event does nothing in React's eyes: to React the value is still unchanged, even though a user can clearly see the new value on the screen. The code above makes React register the change as well.

When to Use the FromServices Attribute

I recently discovered the [FromServices] attribute, which has been a part of .NET Core since the first version.

The [FromServices] attribute allows method-level dependency injection in ASP.NET Core controllers.

Here’s an example:

public class UserController : Controller
{
    private readonly IApplicationSettings _applicationSettings;

    public UserController(IApplicationSettings applicationSettings)
    {
        _applicationSettings = applicationSettings;
    }

    public IActionResult Get([FromServices] IUserRepository userRepository, int userId)
    {
        //Do magic
    }
}
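
For either injection style to work, the dependencies have to be registered with ASP.NET Core's container. A minimal sketch of what that might look like, assuming UserRepository and ApplicationSettings implementations exist (the lifetimes are a judgment call, not something the example dictates):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Hypothetical registrations for the controller above.
    services.AddSingleton<IApplicationSettings, ApplicationSettings>();
    services.AddScoped<IUserRepository, UserRepository>();
}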

Why use method injection over constructor injection? The common explanation is that when only one action needs a dependency, and it's not used anywhere else in the controller, it's a candidate for the [FromServices] attribute.

Steven from StackOverflow posted an answer arguing against using the [FromServices] attribute:

For me, the use of this type of method injection into controller actions is a bad idea, because:

– Such [FromServices] attribute can be easily forgotten, and you will only find out when the action is invoked (instead of finding out at application start-up, where you can verify the application’s configuration)

– The need for moving away from constructor injection for performance reasons is a clear indication that injected components are too heavy to create, while injection constructors should be simple, and component creation should, therefore, be very lightweight.

– The need for moving away from constructor injection to prevent constructors from becoming too large is an indication that your classes have too many dependencies and are becoming too complex. In other words, having many dependencies is an indication that the class violates the Single Responsibility Principle. The fact that your controller actions can easily be split over different classes is proof that such controller is not very cohesive and, therefore, an indication of a SRP violation.

So instead of hiding the root problem with the use of method injection, I advise the use of constructor injection as sole injection pattern here and make your controllers smaller. This might mean, however, that your routing scheme becomes different from your class structure, but this is perfectly fine, and completely supported by ASP.NET Core.

From a testability perspective, btw, it shouldn’t really matter if there sometimes is a dependency that isn’t needed. There are effective test patterns that fix this problem.

I agree with Steven; if you need to move dependencies from the constructor to a method because the controller is constructing too many of them, it's time to break up the controller. You're almost certainly violating SRP.

The only use case I see for method injection is late-binding a dependency that isn't ready at controller construction. Otherwise, it's better to use constructor injection.

I say this because with constructor injection the class knows at construction time whether its dependencies are available. With method injection this isn't the case; you don't know whether the dependencies are available until the method is called.
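
A small sketch of that fail-fast property, using a guard clause in the constructor from the example above (the guard itself is my addition, not part of the original example):

public class UserController : Controller
{
    private readonly IApplicationSettings _applicationSettings;

    public UserController(IApplicationSettings applicationSettings)
    {
        // With constructor injection the dependency must be supplied before the
        // controller can exist; the guard makes a missing dependency fail here,
        // not deep inside an action.
        _applicationSettings = applicationSettings
            ?? throw new ArgumentNullException(nameof(applicationSettings));
    }
}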


C# 8 – Nullable Reference Types

Microsoft is adding a new feature to C# 8 called Nullable Reference Types, which at first is confusing because all reference types are nullable... so how is this different? Going forward, if the feature is enabled, reference types are non-nullable unless you explicitly annotate them as nullable.

Let me explain.

Nullable Reference Types

When Nullable Reference Types are enabled and the compiler believes a reference type has the potential of being null, it warns you, both in Visual Studio and in the build output.

To remove this warning, add a question mark after the reference type. For example:

public string? StringTest()
{
    string? notNull = null;
    return notNull;
}

Now the reference type behaves as it did before C# 8. 
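
The flip side is that once something is declared nullable, the compiler asks you to check it before dereferencing. A minimal sketch of the warning-free pattern:

public int GetLength(string? text)
{
    // Without this check the compiler warns that 'text' may be null when dereferenced.
    if (text is null)
    {
        return 0;
    }

    return text.Length;
}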

This feature is enabled by adding #nullable enable to the top of any C# file or adding <NullableReferenceTypes>true</NullableReferenceTypes> to the .csproj file. It's not enabled out of the box, which is a good thing; if it were, any existing code base would likely light up like a Christmas tree.
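
Here's a minimal sketch of what the file-level directive looks like in practice (the Person class is made up for illustration):

#nullable enable

public class Person
{
    public string Name { get; set; } = string.Empty; // non-nullable: the compiler expects it to be initialized
    public string? Nickname { get; set; }            // explicitly nullable: callers are warned to check it
}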

The Null Debate

Why is Microsoft adding this feature now? Nulls have been part of the language since, well, the beginning. Honestly, I don't know why. I've always used nulls; they're a fact of life in C#. I didn't realize not having nulls was an option... maybe life will be better without them. We'll find out.

Should you or should you not use nulls? I've summarized the ongoing debate as I understand it.

For

The argument for nulls is generally that an object has an unknown state, and this unknown state is represented with null. You see this with the bit data type in SQL Server, which has three values: null (not set), 0, and 1. You also see it in UIs, where it's sometimes important to know whether a user touched a field. Someone might counter with, "Instead of null, why not create an unknown state type or a 'not set' state?" How is that different from null? You'd still have to check for the additional state, and now you're creating an unknown state for every type. Why not just use null and have a single, global unknown state?
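
C# already models this three-state idea for value types. A tiny sketch using bool?, mirroring the nullable bit column described above (the property name is made up):

public class SignupForm
{
    // null = the user never touched the field; true/false = an explicit answer.
    public bool? AcceptsMarketingEmail { get; set; }
}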

Against

The argument against nulls is that null is an extra state you must check for every time you use a reference type. The net result is code like this:

var user = GetUser(username, password);

if (user != null)
{
    DoSomethingWithUser(user);
}
else
{
    SetUserNotFoundErrorMessage();
}

What if the GetUser method returned a user in all cases, including when the user is not found? If the code never returns null, then guarding against it is wasted effort, and removing the guard ideally simplifies the code. However, at some point you'll still need to check for an empty user and display an error message; not using null doesn't remove the need to handle the business case of a user not found.
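
One way to express "returns a user in all cases" is a small result type instead of null. A minimal sketch, reusing the hypothetical methods from the snippet above (the UserLookup type and its member names are my own):

public sealed class UserLookup
{
    private UserLookup(bool found, User? user)
    {
        Found = found;
        User = user;
    }

    public bool Found { get; }
    public User? User { get; }

    public static UserLookup Ok(User user) => new UserLookup(true, user);
    public static UserLookup NotFound() => new UserLookup(false, null);
}

// The caller still handles the "not found" business case,
// it just isn't expressed as a null check:
var lookup = GetUser(username, password);

if (lookup.Found)
{
    DoSomethingWithUser(lookup.User!);
}
else
{
    SetUserNotFoundErrorMessage();
}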

Is this Feature a Good Idea?

The purpose of this feature is NOT to eliminate the use of nulls, but to ask the question: "Is there a better way?" Sometimes the answer is "No." But if, with a little forethought, we can eliminate the constant checking for nulls and in turn simplify our code, I'm in. The good news is C# has made working with nulls trivial.

I do fear some will take a dogmatic stance and insist on eliminating nulls to the detriment of a system. That's a fool's errand, because nulls are integral to C#.

Is Nullable Reference Types a good idea? It is, if the end result is simpler and less error-prone code.