Dec 11, 2018

16 Blockchain Disruptions [Infographic]

Blockchain is one of the most revolutionary technologies to emerge in recent years. Many experts believe it will change our world over the next 20 years as much as the Internet has over the last two decades. Let's find out how businesses in top industries use blockchain technology.



[Via BitFortune]

Dec 10, 2018

Bitdefender Antivirus Overview - Free Version vs Plus Version

If you don't have much experience with IT security programs then you need a solution with a user-friendly interface, like Bitdefender Antivirus. Not only is it easy to use, it's very reliable. The free version alone offers plenty of protection against viruses and malware. This is a fast program that can run at max speed without slowing your system down.

All you need to do to use the free version is create an account to activate it, which is a simple process that requires only your name and email. The lack of complex settings and configuration options means the program is easy to install. You should have it up and running within minutes.

The free version of Bitdefender Antivirus alone offers the following benefits:

  • Fast installation and scanning, without slowing down the computer
  • Powerful protection in a light solution
  • Reliable, on-demand and on-access scanning
  • Essential protection through a minimalistic approach
  • No lag and no annoying ads popping up out of the blue
  • Automatic, real-time protection
  • Safe browsing and anti-phishing

There is a button for a full system scan and a drag-and-drop spot for scanning specific files or folders. You can also view a timeline of recent activity, which updates as a scan progresses.

Its anti-phishing capabilities are second to none. You can trust that you are getting the best protection possible when you shop online, pay bills, and do banking.

As great as the free version is, however, it's still not as comprehensive as Bitdefender Antivirus Plus. With the Plus version, you can manage your security from a mobile device. It has superior cyber-threat detection and multi-layer ransomware protection to keep all of your files safe.

Bitdefender Antivirus' Defense Against Ransomware

Ransomware is becoming a bigger problem in this day and age, so the average person needs as much protection as he or she can get. Bitdefender Plus uses behavioral threat detection to keep your important documents safe from ransomware encryption. You'll have the peace of mind that your money and data are never compromised.

Another benefit of the Plus version is that it offers a Rescue Mode to prevent sophisticated threats, such as rootkits, from affecting your system. When Bitdefender detects such threats, it will reboot your PC in Rescue Mode to clean up and restore your files properly.

Bitdefender Antivirus reviews praise this product's ability to protect PCs and mobile devices. There are versions available for Mac users as well. It's recommended that you at least give the free version a try before deciding whether this is the ideal antivirus and anti-ransomware solution for you.

You won't have to spend much money when investing in the Plus version, thanks to Bitdefender Antivirus promo codes. This antivirus software has won a lot of awards for its performance and fast-scanning technology. Compare Bitdefender coupon offers when shopping online.

To find out more about antivirus software, internet security suites, and VPNs along with coupons, discounts, and special offers, visit George's website (Best PC Security).


Dec 7, 2018

How to Become CompTIA Security+ Certified

This vendor-neutral certification is popular among IT enthusiasts who have some prior experience in IT administration and want to shift their focus to security. Mobile devices and cloud computing have changed the way that business is done. Hence, with the massive amount of data transmitted on networks, security has become an essential part of any organization. This certification validates the skill-set required to make the data flow safer and deter hackers.

Objectives of CompTIA Security+ Certification

The primary objective of this certification is to validate that the student is proficient in security measures to be adopted to deter network attacks. The exam aims to confirm that the examinee has adequate information about the following fields:
  • Competence in Network Security
  • Proficiency in Cryptography
  • Knowledge of identity management and access control
  • Ability to detect threats and vulnerabilities
  • Ability to secure applications, data, and hosts
  • Ability to create the infrastructure to cater to security breaches
  • Ability to anticipate security risks and guard against them
  • Ability to react to security breaches

Syllabus of the CompTIA Security+ Certification

Before a candidate starts preparing for the CompTIA Security+ exam, it is expected that the candidate is well versed in aspects of IT administration with a strong focus on security. It is also expected that the candidate has broad knowledge of the implementation of security measures. An inherent interest in the field of network security is an added advantage that lets candidates obtain the certification with ease.

Scope and Benefits of the CompTIA Security+ certification

Data networks are becoming more important with each passing day. They are the backbone of all kinds of companies around the world. A person equipped with knowledge of network security and a CompTIA Security+ certification will be able to secure these data networks and mitigate possible risks. Obtaining the certification offers the following benefits:

1. Globally recognized

This certification is well known in over 147 countries around the world. Once a candidate obtains the certification, he or she can pursue IT security career opportunities across the globe.

2. Earning potential

The earning potential associated with CompTIA Security+ varies by role, but Security Specialists, Administrators, and Security Managers are in greater demand today than ever before.

3. Vendor-neutral

Security+ is a vendor-neutral certification, which allows you to learn the basic concepts behind IT security without being tied to one specific brand. Obtaining it means you can work with a variety of software, hardware, and network configurations, and apply your expertise across the security industry regardless of vendor or network architecture.

4. Industry supported

The Security+ examination, its syllabus, and its questions are developed and maintained by experts in the field of IT security. The syllabus content is shaped by in-depth survey feedback and contributions from a wide range of industries. Hence, holding the certification means the candidate is recognized as someone who can work in most industries.

Job roles after CompTIA Security+ certification

There are a considerable number of job opportunities available after the completion of the CompTIA Security+ certification. The job opportunities available are:

  • Network Administrator
  • Systems Administrator
  • Security consultant
  • Security Specialist
  • Security Architect
  • Information Assurance Technician
  • Security Manager
  • Security Engineer


NOTE: If you are pursuing your Security+ or any other certifications, you can get a 45% off discount on test materials from PrepAway by using promo code GP264719 at check out! 

Dec 6, 2018

Your Brain on Passwords [Infographic]

Remembering a myriad of passwords or passphrases can be very difficult. It's one of the reasons I started using LastPass years ago. You only have to remember one password when you use LastPass: the password to access your LastPass vault. Every other password you need can then be randomly generated and stored there, keeping you secure!

On top of that, LastPass can be used on multiple workstations for free! If you want to take it with you on your mobile device, you'll have to upgrade to premium for the bargain price of $10 per year.
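As an aside, the kind of random generation a password manager performs is easy to sketch. This is only a conceptual illustration in Python using the standard secrets module, not LastPass's actual algorithm:

```python
import secrets
import string

def generate_password(length=16):
    # Draw each character from letters, digits, and punctuation using a
    # cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```

Because secrets draws from the operating system's CSPRNG, every call produces an independent, unguessable password, which is exactly why you need a vault to remember them all.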

Anyway, the folks at LastPass put together this interesting infographic that shows how your brain works when it comes to passwords. Check it out!



Dec 5, 2018

I think I will be sticking with CompTIA certifications

I have been in the IT business for 14 years now. I know this because my now ex-wife was pregnant with my daughter right before I took my first job in IT. Well my daughter's 14th birthday is right around the corner. You do the math.

Since that time I've completed two bachelor's degrees, in computer networking and network security, and have earned several IT certifications, including Microsoft, VMware, and of course CompTIA.

Early on I started with CompTIA certifications just to get my basics down. I quickly earned my A+, Network+ and Server+. After those, I started working on Microsoft certifications and eventually earned my MCSA, then turned around and got VMWare certified.

All of that was great, but when it comes down to it, I'm a technology generalist. I don't just use Microsoft. I don't just use VMware. I use a little bit of everything! That's why I've always liked CompTIA certifications. They are not vendor specific! The stuff you learn while studying them is good to know no matter what platform you are using!

Last year I finally got Security+ certified, which brought me back into the CompTIA fold, and just last week I earned my Cloud+ certification. With CompTIA's stackable certifications now, that makes me a CompTIA Secure Cloud Professional (CSCP).


Now I mentioned that I had already taken A+, Network+ and Server+. I took those back when CompTIA's certifications didn't expire, so if you look at my transcripts, CompTIA still recognizes me as being certified in those areas. The problem though is with their new stackable certification program, they don't recognize those certifications as being stackable apparently... I guess that means it's time to renew! At least, I'll renew my Network+ and Server+. I'm not too worried about A+ at this point in my career.

When I renew Network+ and Server+, that will make me a CompTIA Network Infrastructure Professional (CNIP) and a CompTIA Cloud Admin Professional (CCAP)!

On top of that, next year they are releasing a brand new single exam Linux+ certification. I plan to take that exam as well shortly after they do. With that and my renewed Network+, I'll be CompTIA Linux Network Professional (CLNP) certified!

I figure after that, I just need to keep these five individual certifications up to date, and thereby keep my stackable certifications up to date and I'll be good to go until I decide to retire.

Early on in my career, I might not have seen things this way, but at this point in my career I would rather keep things simple. I feel like my experience speaks for itself, and just keeping these certifications will just be gravy.

What do you think about this approach? Do you agree or disagree? Do you think it's still important to get vendor specific certifications if you are a technology generalist? Why or why not? Let me know what your thoughts are in the comments.

On a related topic, if you too are pursuing CompTIA or any other certifications, you can get a 45% off discount from PrepAway by using promo code GP264719 at check out! I thought I'd throw that in there.

Dec 4, 2018

If you can't beat 'em, join 'em! Microsoft to dump Edge and create a Chromium based browser!

Well, Microsoft is reportedly throwing in the towel on its latest attempt at browser relevance and giving up on the widely unpopular Edge browser! It's kind of ironic, too, since it was Microsoft that won the original browser wars of the late '90s by killing off Netscape Navigator with Internet Explorer installed by default in Windows.

What goes around comes around, I guess. Microsoft was king of the heap until Firefox and eventually Chrome came along, and now its next browser will reportedly be based on Chromium, the engine that powers Chrome!

From BI:
...Microsoft is moving away from its own EdgeHTML rendering engine and towards Chromium, the web engine that powers Google Chrome. Chromium, first released by Google in 2008, has become the web's predominant standard, thanks to the wild success of the Chrome browser. 
The success of Chromium has become something of a headache for Microsoft, both internally and externally — the Verge reports that employees and customers alike have been "frustrated" that the Microsoft Edge browser doesn't work properly with some websites and apps that were optimized for Chromium. 
And so, it sounds like Microsoft is poised to release a new browser, based on Chromium, that would leave EdgeHTML in the past. Intriguingly, the Verge reports that this move would also open the door for a version of Google Chrome on the Windows app store — the main thing stopping that from happening, so far, is that Microsoft has required all web browsers in the Windows Store to use EdgeHTML. If EdgeHTML goes, so too will that barrier.
What do you think about this? Do you think Microsoft will finally give us a browser worth using? Let us know what you think in the comments!

Oct 24, 2018

How to Deal With an Overheating Smartphone

What causes overheating? You've felt it before, more times than you'd care to remember. How many times have you used an electronic device and felt it heating up during use? It happens to everything from computers and phones to kitchen appliances. We're used to the notion of devices with moving parts, or computers working on demanding tasks, heating up, but overheating phones often catch people off guard, especially when the temperature goes beyond the safe range. Then it's not just a case of your phone being unpleasant to use or slow - overheating can damage your device or even cause it to catch fire or actually explode. So let's take a deeper look at the nature of this issue and how to solve it.

Common Causes

Our phones use electricity to run all their systems and enable the central processing unit (CPU) to carry out the necessary functions. When activities housed in the central system-on-a-chip (SoC) become overloaded, the CPU slows down, hence the occasional long waits for phones to process data and perform operations. This is especially common in Android phones with less RAM. It also has a side effect: the CPU generates heat as it works. The bigger the workload, the more heat. Many of us are used to this notion from our computers, but those have fans or even intricate liquid cooling systems. Phones, on the other hand, have been made as compact as possible, which makes venting excess heat harder.

Graphically intense apps, or apps requiring continuous and difficult calculations, such as high-end video games and HD video streaming services, can easily overload your system and cause your phone to overheat. A special case must be made for viruses and malware: viruses and cryptocurrency miners abuse your phone's resources for their own ends even when you're not doing anything demanding with your phone.

Other than overloading the CPU through various actions and operations, there are a number of other known causes of phone overheating. Heat can also come from the external environment. When the phone is directly exposed to sunlight or extreme temperatures, you might receive a warning message telling you to let your phone cool down. Unless your phone has other heat-related issues, this is solved by simply moving it.

The heat may also come from other components, such as the Bluetooth or Wi-Fi modules. When used for a long time, they are known to heat up as well - not to a dangerous level, but they can contribute to the overall issue. A faulty battery is a very special case: it can cause the worst and most damaging overheating, not only when the phone is in use but also while it's charging.


How to Deal With an Overheating Phone

Now that we've addressed the question "why does your phone get hot," it's time to look at measures to prevent overheating and to deal with an already overheated device. Is your iPhone really hot and you don't know what's causing it? Is your phone's battery getting hot frequently and you still haven't found the root of the problem? I hope the following tips help.

It's always advisable not to leave your devices on the charger for too long. In fact, experts advise only charging your phone to 80% during the day and extending to 100% at night when the phone isn't in use. Better still, stick with 80% at all times to increase the longevity of your battery. By the same token, don't let your battery drop to critical levels - it isn't dangerous, but it isn't optimal either.

When charging a phone, place it on a hard, cool surface rather than a sofa or bed; soft surfaces only trap heat, which worsens the situation. Similarly, if your phone gets too hot for any reason, don't hurry to stash it in your pocket or bag - exposure to air is the best remedy here. But don't put it somewhere actively cold, like a fridge. Rapid temperature changes put strain on the materials and can damage them, and may cause condensation to form on parts of the device. Just place your device in an open space away from sunlight. If possible, you might also want to remove its panels.

Dealing with the main offender - the CPU - might require a bit more of a personal touch. Look at how you use your device: how many apps you have running at once, and how many radios are on (cellular, Bluetooth, Wi-Fi, 3G, 4G). When playing a game that demands a lot from the SoC, you might want to close some apps working in the background. Remember to reboot your phone from time to time. Also remember your cybersecurity basics and check your phone for malware if it might be at fault. Updating your apps might help as well, as developers may patch bugs that were overworking your phone for no reason.

But always remember that you can only do so much on your own. Check online to see whether other people with your phone model are having the same issues. If you're using your phone responsibly but keep having heat-related issues that aren't common for your device, it's time to contact the manufacturer or a service centre. You might have a defective battery or other faulty hardware, and the sooner you take care of it, the better.

Conclusion

Future trends in phone technology are promising, as newer models are being built with internal cooling pipes that run to the processing unit. These trends should make the issue of overheating phones a thing of the past. Until such models are actually a reality, the safety measures above will go a long way toward keeping your phone from overheating. Last but not least, you might also consider a phone upgrade if you're an all-day surfer or gamer and just can't help it.

Now that that's covered, what other annoying menaces have you found draining your battery life? Join the discussion in the comment box as we share more remedies.


Oct 7, 2018

Update for Xen 7.1+ - STOP: 0x0000007B BSOD After Restoring UrBackup Image to XenServer VM

A few months ago I posted about getting a STOP: 0x0000007B blue screen of death on one of my VMs after restoring an image backup from UrBackup in Xen 6.5. My solution then was to create the blank VM that we were restoring to using a Windows XP template.

Well, the other night I was migrating all of my old Xen 6.5 VMs to a new Xen 7.1 cluster, and that troublesome VM popped up again! I got another BSOD when I powered it up in the new cluster!



The trouble this time is that Xen 7.1 doesn't have a Windows XP template! Damn it!

No problem, I did find a solution. If you are getting this error for one of your VMs after moving, upgrading, or restoring to Xen 7.1 or newer, just use the "Other install media" template located at the bottom of the template list.


After using that template and attaching the original disk, it booted up just fine!

Sep 26, 2018

SQL Query to see how long DBCC CHECKDB will take

Last night while converting a VMware VM to a XenServer VM, I had a bit of an issue with one of the database VMs, and several of the databases came up as "Suspect."

We decided to follow this procedure here (How to fix a Suspect Database), and it went fairly quickly except on the biggest database, which was almost 100GB in size.

Well, we wanted to know how long the DBCC CHECKDB would take to finish! I'm sure you are here because you are in the same position. Here is a query that will give you an estimated completion time:

 SELECT session_id,
     request_id,
     percent_complete,
     estimated_completion_time,
     DATEADD(ms, estimated_completion_time, GETDATE()) AS EstimatedEndTime,
     start_time,
     status,
     command
 FROM sys.dm_exec_requests
 WHERE database_id = <YOUR DATABASE ID NUMBER>


Fairly simple right? Your output will look like this:


If you are wondering how to find your database_id you can find it by running this query:

 USE <DATABASE NAME>
 SELECT DB_ID() AS [Database ID]
 GO
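As a side note, the two queries can be combined by calling DB_ID() with the database name directly in the WHERE clause (the database name below is a placeholder). Filtering on the command column, which should start with DBCC for a running check, keeps unrelated sessions out of the results:

```sql
SELECT session_id,
    percent_complete,
    DATEADD(ms, estimated_completion_time, GETDATE()) AS EstimatedEndTime,
    command
FROM sys.dm_exec_requests
WHERE database_id = DB_ID('<DATABASE NAME>')
    AND command LIKE 'DBCC%'
```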

Again, fairly simple right? I hope this helped!

Sep 17, 2018

Getting Fog PXE boot working on a Thinkpad T460P, T470P and a T480P

I've been using Fog Project for years. It's my favorite open source operating system imaging tool for large networks. We were using it at my company up until a few years ago, when we started buying Thinkpad T460P laptops and my desktop technician at the time couldn't get these laptops to boot. Instead of doing some actual Googling, he and my Systems Administrator at the time wanted to use WDS instead.

Well, both of those guys have since moved on to other places, and I decided that we were going to save a Windows Server license and go back to Fog!

The first thing I had to do was figure out how to get the T460Ps, T470Ps, and now T480Ps to boot to the Fog boot menu. When I first tried booting my T460P, this is the message I received:


Long story short, it got stuck saying No configuration methods succeeded.... Boo!

Well, the fix was actually pretty easy. Instead of using the undionly.kpxe TFTP file like the documentation says, we used intel.kpxe, and it worked like a charm! Now we get the Fog boot menu on all models of our Lenovo laptops!
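For reference, if your DHCP scope is served by an ISC DHCP server rather than Fog itself, switching boot files amounts to changing the filename option in dhcpd.conf. The addresses below are made-up examples, not values from our network:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.10;     # IP of the Fog/TFTP server (example)
    filename "intel.kpxe";        # was "undionly.kpxe"
}
```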

Have you had problems with Lenovo and Fog? What did you have to do to get it to work? Let us know in the comments!

Sep 10, 2018

Active Directory Users and Computers Will Not Open After Azure Site Recovery Test Failover

The other day we wanted to test some database stuff in our Production Azure environment. Obviously, we didn't want to mess with actual Production data, so since we're using Azure Site Recovery for our disaster recovery plan, we decided to initiate a test failover of the impacted systems in an isolated network.

Also, since we're using our own domain controller VMs, we had to fail those over for authentication. This is where I ran into problems. After initiating the test failover of my domain controllers I couldn't open Active Directory Users and Computers. When I tried, I got this message:
Naming information cannot be located because: The specified domain either does not exist or could not be contacted. Contact your system administrator to verify that your domain is properly configured and is currently online.


Well, after banging my head against the wall for a few hours, I finally found a solution. Open a registry editor and browse to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters

Open the SysvolReady value. If it is 0, change it to 1. If it is already 1, change it to 0 and click OK, then change it back to 1 and click OK again; toggling it re-triggers the SYSVOL share. Exit the registry editor.
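If you'd rather script the toggle than click through the registry editor, reg.exe can do the same thing from an elevated prompt. This is just a sketch of the steps described above:

```bat
rem Check the current value first
reg query HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SysvolReady

rem If it is already 1, bounce it to 0 and back to 1 to re-trigger the share
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SysvolReady /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SysvolReady /t REG_DWORD /d 1 /f
```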

Boom! After that I could open Active Directory Users and Computers again without a reboot!

One thing that still didn't work, though, was Netlogon and Group Policy. To fix that on my two domain controllers in the test environment, I had to copy all the contents of C:\Windows\SYSVOL\domain\NtFrs_PreExisting___See_EventLog to C:\Windows\SYSVOL\domain\ on both domain controllers. When that was done, I ran the following on both test domain controllers:

  • net stop netlogon
  • net start netlogon
After that, Netlogon and Group Policy were working again. I also took the extra steps of seizing the FSMO roles and deleting the other domain controllers from Active Directory Users and Computers, as well as from Active Directory Sites and Services along with their sites. That way I wouldn't have to deal with replication issues in the isolated test environment.
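For what it's worth, the copy and service restart can be wrapped into a small batch file to run on each test domain controller. The SYSVOL path here assumes a default install:

```bat
rem Copy the pre-existing SYSVOL content back into place
robocopy "C:\Windows\SYSVOL\domain\NtFrs_PreExisting___See_EventLog" "C:\Windows\SYSVOL\domain" /e

rem Restart Netlogon so the restored SYSVOL is picked up
net stop netlogon
net start netlogon
```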

Have you ever run into something like this? Did you fix it differently? Let us know in the comments!

Aug 31, 2018

Alternative Download For HP Proliant SPP

Is it just me or should hardware manufacturers make their drivers easy to download regardless of support contracts? I've been a loyal HP server user for years, but just recently something really chapped my ass! I went to download the latest Service Pack for Proliant (SPP) so I could install drivers on an older Proliant system and couldn't! Why? Because I didn't have a current support contract with HP!

I've also been a loyal Lenovo user for years. Guess what? I can download their System Update tool just fine! No need for some bullshit login! In the past I could always download HP's SmartStart CDs without a login. Why the sudden change?

Now you're probably saying, why not just buy a support contract? Well, I already have full hardware support from our aftermarket reseller Curvature at a fraction of the cost of HP's support. I don't feel the need to pay extra for roughly the same level of support. The only drawback, at least with HP, is that I can't get tools like the SPP!

Well, I found a good Samaritan who is making the SPP downloads available for free! At the time of this writing, the March and June 2018 versions of the SPP are available here.

Hurry up and grab them before they are gone!

Aug 28, 2018

End of an Era: Coleman University is Out of Business



This is a real shame. I myself am a Coleman alumnus. I just heard the news while interviewing someone for my company's open Systems Administrator position in San Diego.

Via Fox 5:
Coleman University -- a private college that's operated in San Diego since 1963 -- is closing at the end of the current term, school leadership announced Thursday. 
"To all our very fine students, staff, and faculty, I am personally sorry that we have to close Coleman University," President & CEO Norbert J. Kubilus said. 
In a letter to students, faculty and staff obtained by FOX 5, Kubilus said that Coleman learned in late June that they had lost a bid for accreditation from the Western Association of Colleges and Universities Senior College and University Commission, putting the school in a financial bind.
Continue Reading


Aug 27, 2018

Shadow Admins: What Are They and How Can You Defeat Them?

Managing something you don't even know exists in your network is always a challenge. This is why the problem of stealthy, or shadow, admins needs to be acknowledged by security officers. After all, it only takes compromising a single account with elevated privileges to put the security of an entire company in jeopardy.

So, who are these shadow admins and what strategies may help you combat the threats they pose? Keep on reading to find answers to these questions.

Shadow admins: what are they?

When we talk about shadow or stealthy admins, we are referring to accounts that have been delegated admin-level privileges in Active Directory, usually through direct permission assignment. This is why these shadow admins are also called delegated admins.

In general, there are four main groups of privileged accounts:

  • Domain admins
  • Local admins
  • Application/services admins
  • Business privileged accounts

Any of these categories may have both legitimate and shadow administrative accounts. However, while legitimate privileged accounts are easy to identify, stealthy admins are not members of any of the default administrative groups in Active Directory and, therefore, can’t be found that easily. As a result, many organizations simply don’t take delegated admins into account when looking for privileged users in Active Directory.

Ignoring delegated admins is not an option, though. These accounts can have unrestricted control over legitimate Active Directory admins and may be able to:

  • change passwords for privileged accounts
  • change permissions on the existing admin groups or accounts
  • add new accounts to the existing administrative groups
  • create new admin groups in Active Directory, and so on.

Therefore, a successful attack on just one delegated admin account can have consequences just as devastating as the compromise of a legitimate privileged account.

Let’s take a closer look at the main risks posed by shadow admins.

Top risks posed by unmanaged admin accounts


The presence of stealthy administrators in your network creates a variety of problems, including:

  • Cybersecurity risks
  • Financial risks

Unmanaged privileged accounts are like a Christmas gift for the attackers. Since they are often not taken into account by an organization’s cybersecurity policy, they can be easier to compromise while still providing the attackers with unrestricted access to your company’s critical data.

With the increased risks of data leakage, the presence of shadow admins in the network creates additional financial risks for the company. Not to mention that the news about the loss of valuable, sensitive data can cause severe damage to the company’s reputation.

In April 2017, for instance, Oracle’s Solaris operating platform was targeted by hackers using shadow admins to get into the system. In particular, there were two malicious programs discovered (EXTREMEPARR and EBBISLAND) that were able to elevate the rights of existing users to the administrative level. Thus, they turned regular users into shadow admins with remote root access to platform networks.

The only way to mitigate risks posed by such accounts is by identifying all shadow admins within your network and managing them effectively. In the next section, we talk about the ways you can find and manage all administrative accounts in your company’s network.

Best practices for detecting and managing shadow admins

As of today, there are two ways you can detect delegated admins in your network and mitigate the risks they pose:

  • By analyzing Access Control Lists (ACLs) on Active Directory
  • By building an effective privileged access management strategy

ACL analysis. When trying to identify all of the privileged accounts present in your company’s network, look for tools that scan ACLs and analyze effective permissions rather than an account’s presence in a particular Active Directory group. That way, you’ll be able to find even the accounts that were delegated additional privileges without being added to any admin group in Active Directory.

Once identified, make sure that only legitimate administrators (such as members of Domain Admin groups) are granted such critical privileges as Replicating Directory Changes All, Reset Password, or Full Control.
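The logic behind effective-permission analysis, as opposed to simple group-membership checks, can be illustrated with a small Python sketch. The account records, group names, and rights strings here are simplified stand-ins for what a real ACL scan of Active Directory would return:

```python
# Rights that should only ever belong to legitimate administrators.
SENSITIVE_RIGHTS = {"Replicating Directory Changes All", "Reset Password", "Full Control"}

# Default administrative groups whose members are *legitimate* admins.
ADMIN_GROUPS = {"Domain Admins", "Enterprise Admins", "Administrators"}

def find_shadow_admins(accounts):
    """Return names of accounts holding sensitive rights via direct ACL
    grants while not belonging to any default administrative group."""
    shadow = []
    for account in accounts:
        holds_sensitive = bool(SENSITIVE_RIGHTS & set(account["rights"]))
        is_known_admin = bool(ADMIN_GROUPS & set(account["groups"]))
        if holds_sensitive and not is_known_admin:
            shadow.append(account["name"])
    return shadow
```

A service account granted Reset Password directly on an OU, for example, would be flagged here even though no group membership gives it away.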

Privileged access management. Building a well thought out privileged access management strategy can also help you solve the problem of stealthy admins. Your cybersecurity strategy should include two measures:

  • Continuous monitoring and audit of the network
  • Effective management of privileged access to critical data and assets


Audit and monitoring are important for several reasons. First and foremost, they give you better visibility into the network: you know who can access what. Secondly, all information gathered at this stage is essential for investigating security incidents should any take place in your organization.

When monitoring your network, pay special attention to the following factors:

  • What accounts have elevated privileges and can access your company’s critical assets (who can access particular servers or domains, who can work with your company’s sensitive information)
  • What privileged accounts and elevated permissions were added just recently (to identify a possible attack in progress)
  • Whether there are any suspicious activities (a sudden use of a “dead” privileged account, an admin logging in from an unusual IP address, and so on)


Ensuring an appropriate level of privileged access management is the second step in building an efficient cybersecurity strategy and combating shadow admins. Once you know who can access your company’s valuable data, you can take necessary measures to either secure or dismiss these accounts. Consider implementing the least-privilege approach for all privileged accounts and assigning any elevated permissions only on an “as needed” basis.

When looking for an efficient solution to these problems, turn your attention to Ekran System. It’s a universal platform for monitoring, auditing, and managing both regular and privileged users. This platform gives you full visibility into your network and allows you to take proactive measures to prevent privilege misuse at any level.

Conclusion

Delegated or shadow administrative accounts can pose a serious threat to an organization’s cybersecurity when they remain undiscovered. However, identifying stealthy admins isn’t enough – you need to manage them effectively in order to mitigate any cybersecurity and financial risks they may pose. While ACL scanning works well for discovering accounts with elevated permissions, the only way to effectively manage and secure these accounts is by implementing an appropriate level of privileged access management.

Aug 24, 2018

Sandbox-Evading Malware Are Coming: 7 Most Recent Attacks

Nowadays, anti-malware applications widely use sandbox technology for detecting and preventing viruses. Unfortunately, criminals are developing new malware that can evade this technology. If such malware detects signs of a VM environment, it remains inactive until it is outside of the sandbox. Experts predicted that in 2018 we would see an increasing number of cyber attacks performed with sandbox-evading malware. However, the epidemic actually started two years earlier. Let's look at the most recent attacks that succeeded because modern security solutions weren't able to detect sandbox-evading malware.

1. Grobios

Since early March 2018, there have been cases of attacks performed with the RIG Exploit Kit that infects victims with a backdoor trojan called Grobios. This malware is packed with PECompact 2.xx, which allows it to evade static detection. Though the unpacked file exposes no function names, it uses hashing to obfuscate the names of the API functions it invokes, and it parses the PE headers of DLL files to match the name of a function to its hash. In addition, the trojan performs a series of checks to become aware of its environment. In particular, it looks for virtual machine software like Hyper-V or VMware, checks for usernames containing the words "malware", "sandbox", or "maltest", and compares driver names against its blacklist of VM drivers.

2. GootKit

This banking trojan has attacked users, mainly in Europe, through spam sent via MailChimp since 2017. It steals bank customers’ credentials and manipulates their online sessions. Before installation, the malware uses a dropper to probe its environment: the dropper looks for specific names in the Windows Registry and for virtual machine resources on disk. It also checks the device’s BIOS to discover whether a virtual machine client is installed and examines the machine’s MAC address. If the dropper doesn't find any signs of a sandbox, the virus payload is executed and the GootKit trojan carries out additional checks, such as looking for hard drives and CPU names that confirm a physical machine, and for virtual machine values.

3. ZeuS Panda

This is another banking trojan that uses environment-aware techniques to skip the sandbox. Its main goal is stealing users’ banking credentials and account numbers by mounting a “man in the browser” attack. To infect a targeted computer, it changes the browser's security settings and alerts. After loading, the trojan checks for indicators of a sandbox environment, such as the presence of Sandboxie, ProcMon, the SoftICE debugger, and other tools. In 2018, ZeuS Panda targeted banks in Japan, Latin America, and the United States, as well as popular websites like YouTube, Facebook, and Amazon.

4. Heodo

Heodo is a banking trojan that was first detected in 2016 and was subsequently used in a 2017 attack against US bank clients. This malware infects victims through invoice emails from a known contact that contain an attached PDF file. After a user clicks on the attachment, the trojan is loaded. It uses a technology known as a crypter that allows the malware to hide from the sandbox environment. Heodo embeds itself within software that is already installed on the infected computer and makes mutated copies of itself on the infected system.

5. QakBot Trojan

A massive attack with the QakBot Trojan was detected in 2017, when the malware locked Active Directory users out of their company's domain by stealing user credentials. This malware infects victims with a dropper that uses delayed execution to evade the sandbox: it loads onto the targeted computer and waits 10 to 15 minutes before executing. Since antivirus sandboxes analyze newly loaded files for only a short period of time, the dropper remains undetected.

6. Kovter

This trojan was initially developed as police ransomware, but in 2017 it re-emerged as fileless malware that can easily bypass sandbox detection. It infects victims via a malspam email with an attachment containing macros for Microsoft Office files, or a .zip attachment containing infected JavaScript files. By hiding in the Windows registry, Kovter leaves the sandbox undetected. Victims are asked to pay a $1,500 ransom in Bitcoin.

7. Locky

Locky is a classic example of environment-aware malware, released in 2016. It was spread in an email campaign that delivered an infected Microsoft Word document. The document contained a malicious macro that saved and ran a binary file, which in turn downloaded the encryption trojan. This malware easily bypasses the sandbox because execution begins with a user interaction (enabling the macro), and the VM environment doesn't perform any interactions with the infected document.

How to withstand sandbox-evading malware

As you can see, hackers apply different sandbox evasion techniques to make their viruses undetectable in the sandbox. After infecting a victim's computer, this malware tries to understand its environment by doing the following:

  • looking for signs of a virtual machine (ZeuS Panda)
  • detecting sandbox-specific files and resources (GootKit)
  • waiting for user interactions (Locky, Kovter, Heodo)
  • delaying its execution (QakBot Trojan)
  • obfuscating its code and API calls (Grobios)
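
To make those checks concrete, here's a rough, defender-side sketch (my own illustration, not code from any of the malware above) of what "looking for signs of a virtual machine" can amount to on a Linux box; the username words are the ones Grobios reportedly looks for:

```shell
#!/bin/bash
# Rough sketch of common VM-environment checks. Run it on a host to see
# which indicators it would trip; real malware does this silently.

vm_signs=0

# The 'hypervisor' CPU flag is set inside most virtual machines
if grep -q '^flags.* hypervisor' /proc/cpuinfo 2>/dev/null; then
    vm_signs=$((vm_signs + 1))
fi

# Well-known VM vendor strings in the DMI/BIOS data
if [ -r /sys/class/dmi/id/sys_vendor ] && \
   grep -qiE 'vmware|virtualbox|qemu|kvm|xen|microsoft' /sys/class/dmi/id/sys_vendor; then
    vm_signs=$((vm_signs + 1))
fi

# Analysis-style usernames, as Grobios checks for
if id -un | grep -qiE 'malware|sandbox|maltest'; then
    vm_signs=$((vm_signs + 1))
fi

if [ "$vm_signs" -gt 0 ]; then
    echo "VM indicators found: $vm_signs"
else
    echo "no VM indicators"
fi
```

Malware that trips checks like these simply refuses to detonate, which is why results from a stock sandbox can come back clean.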

Sandbox technology alone is unable to detect environment-aware viruses and lets them harm your computer. Thus, developers of security software should turn their attention to more advanced approaches to malware detection, such as customized sandbox environments, behavior analysis, and machine learning.

Conclusion

Sandbox-evading viruses are a type of modern malware that can't be detected by traditional antivirus solutions. Computer users are now at high risk of falling victim to cyber criminals, as this malware is rapidly spreading across the Web. While users should follow cybersecurity best practices, software developers should hurry up and implement the latest technologies to improve their anti-malware solutions.

Jul 27, 2018

The Microsoft License Verification Process Scam

Oh man, oh man do I hate Microsoft! Not the software so much, I mean they do actually put out really good products. What I hate is their licensing rules, and how they make it so damned convoluted and confusing! On top of that, right after you've worked with your Microsoft Licensing re-seller to button up your licenses, you may periodically get contacted to participate in the Microsoft License Verification Process! Weeeeee!

I'm not sure what happened, but about two years ago was my first experience with this. We complied, and Microsoft came back and said we were out of compliance based on random changes they had made to their licensing since our last true-up with our re-seller, and we had to fork over about $30,000 that we didn't budget for to become compliant again.

To be fair, our previous re-sellers did give us some bad information about licenses, so after that audit we switched re-sellers.

Well, I just got picked again this year. In the 13 years I've worked in Information Technology, these last two years were the first time I'd ever seen this... And now I think I know why. It's basically a shady marketing tool!

I reached out to our new re-seller about this so-called audit, and here is what they said:
We’ve run into this a lot recently and over the years. Their wording seems to hide the fact that you don’t have to do this. 
The emails starting with “v-“ are not Microsoft and they are not audits. They are voluntary, but the results are shared with Microsoft at which point you would be required to reconcile anything they find.  
If you want to do an engagement like this to assess your licensing, we can do it for you. We don’t share the results with Microsoft and just deliver them to you.
In their frequently asked questions, the people contacting me about this Microsoft Verification Process say this:


I asked my rep about that too and they said:
Man I don’t like that wording. “us” 
That v- in the email means that person doesn’t work for Microsoft, but is contracted. Microsoft allows this to happen, but it’s not really their employees. I see these all the time and we just ignore them unless you would like to do an engagement. 
Microsoft does audit occasionally, but this email is pretty threatening. Microsoft audits don’t come in email form, I’m 99% sure.
So long story short, if you are contacted about participating in a Microsoft License Verification and the people contacting you have a "v-" before their email address, you should ignore them and reach out to your re-seller instead. It's really just a ploy so Microsoft can increase their bottom line before your annual true-up!

Have you experienced one of these? Did you comply? Is my rep wrong? Let us know your story in the comments!


Jun 29, 2018

I've switched to Let's Encrypt for TLS encryption on my personal email server

Years ago I started using iRedmail for my personal email. I love it, and it's super easy to set up. Way back then I purchased a three-year Comodo SSL certificate for it. Well, that certificate expired, and it looks like none of the affordable SSL companies are offering three-year certificates anymore... Bummer.

Oh, well. I figured why waste the money anyway when I could just get a free certificate from Let's Encrypt! The only issue I have with Let's Encrypt is that they only issue three-month certificates. Apparently they think it's more secure that way. Here are the reasons they give from their blog:

  • They limit damage from key compromise and mis-issuance. Stolen keys and mis-issued certificates are valid for a shorter period of time.
  • They encourage automation, which is absolutely essential for ease-of-use. If we’re going to move the entire Web to HTTPS, we can’t continue to expect system administrators to manually handle renewals. Once issuance and renewal are automated, shorter lifetimes won’t be any less convenient than longer ones.

Well, they are right about one thing, the automated renewal process is pretty convenient. The only issue I had with it was that they recommend using Certbot for Linux based servers. When I followed this post (How To Secure Nginx with Let's Encrypt on Ubuntu 16.04) on how to install it, I got a bunch of errors and jacked up my Ubuntu based iRedmail server... (Thank God for backups!)

Anyway, there are much easier scripts and utilities around that can basically do the same thing. I opted for acme.sh! From their page:
  • An ACME protocol client written purely in Shell (Unix shell) language.
  • Full ACME protocol implementation.
  • Support ACME v1 and ACME v2
  • Support ACME v2 wildcard certs
  • Simple, powerful and very easy to use. You only need 3 minutes to learn it.
  • Bash, dash and sh compatible.
  • Simplest shell script for Let's Encrypt free certificate client.
  • Purely written in Shell with no dependencies on python or the official Let's Encrypt client.
  • Just one script to issue, renew and install your certificates automatically.
  • DOES NOT require root/sudoer access.
  • Docker friendly
  • IPv6 support
  • It's probably the easiest & smartest shell script to automatically issue & renew the free certificates from Let's Encrypt.
Installation was easy, and so was requesting my first certificate. Part of the install process creates a cron job to automatically renew your certificates. The one modification I had to make was to create a script that copies the new certs from the default location in the installing user's home directory to the directory where I keep my certificates and keys:

 #!/bin/bash  
 # Copy renewed certs from acme.sh's working directory to where my services expect them  
 cd ~/.acme.sh/domainname.com/  
 yes | cp -rf *.cer /pathto/ssl/certs/  
 yes | cp -rf *.key /pathto/ssl/private/  
 # Restart services so they pick up the new certificate  
 service apache2 restart  
 service dovecot restart  
 service postfix restart  

After that, I created a cron job to run that script nightly, since their renewal script runs twice a day. Boom, done! Now I shouldn't have to worry about SSL certificates on this server for a very long time, or at least until I build my next one.
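
For reference, the nightly cron entry ends up being a one-liner (the script path here is hypothetical; use wherever you saved the copy script):

```
# crontab -e -- run the cert-copy script every night at 3am
0 3 * * * /root/copy-certs.sh >/dev/null 2>&1
```

As an alternative to a copy script, acme.sh also has an --install-cert subcommand that copies the cert and key where you tell it and runs a reload command of your choice after every renewal.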

Do you use Let's Encrypt on your servers? Do you like it? Why or why not? Let us know in the comments!

Jun 14, 2018

Script To Configure Your Azure Application Gateway For TLS 1.2 Only

If you are just reading this post, you are cutting things pretty close with PCI/DSS compliance! After all, you have until the end of the month to remove older versions of TLS to remain PCI compliant.

Well, if you are using Application Gateways in Azure to secure your web servers, you're in luck, because setting a custom SSL policy is pretty easy. You just have to do it via PowerShell.

Now, this script assumes you've already created your Application Gateway. If you are trying to configure one from scratch, you'll have to keep Googling my friend... Sorry.

Before you can run your script, you must first connect to Azure via PowerShell, and select your subscription.

  • Connect-AzureRmAccount
  • Select-AzureRmSubscription -SubscriptionName "<Subscription name>"

After that, you can copy and paste the below script to set your custom SSL policy. Be sure to replace the Application Gateway Name and the Resource Group Name to match your environment.

Here's the script:

 # get an application gateway resource  
 $gw= Get-AzureRmApplicationGateway -Name <Application Gateway Name> -ResourceGroupName <Resource Group Name>  
 # set the SSL policy on the application gateway  
 Set-AzureRmApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom -MinProtocolVersion TLSv1_2 -CipherSuite "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA256"  
 # validate the SSL policy locally  
 Get-AzureRmApplicationGatewaySslPolicy -ApplicationGateway $gw  
 # update the gateway with validated SSL policy  
 Set-AzureRmApplicationGateway -ApplicationGateway $gw  

After that, your Application Gateway will only support TLS 1.2, and will use the following ciphers in order:
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256

Pretty easy right? Did this help you out? Let us know in the comments!

May 24, 2018

A faster and easier way to make LUN files for your SCST SAN

I've been writing a lot lately about SCST iSCSI SANs again. It's been a few years since I've had a chance to configure one of these from scratch, and a lot has changed since 2012 when I first started using these.

In the past I've always used dd to create LUN files for use with SCST. For thin provisioned LUNs I would run something like the following:
sudo dd if=/dev/zero of=lun1 bs=1 count=0 seek=1T
For thick provisioned LUNs I would run this instead:
 sudo dd if=/dev/zero of=lun1 bs=1M count=1048576
Well, I found two utilities that do the same thing, but they are way faster and the syntax is way easier! One is called fallocate and the other is called truncate!

To create a thick provisioned LUN, you would use fallocate to create your file by running:
sudo fallocate -l 1T lun1
To create a thin provisioned LUN, you would use truncate to create your file by running:
truncate -s 1T lun1
So simple right? Why am I just learning about this now!?
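
To see the difference between the two, compare a file's apparent size with the blocks actually allocated: the truncate'd (thin) file reserves nothing up front, while the fallocate'd (thick) file reserves everything. A quick sketch, shrunk to 100M so it runs anywhere (file names are throwaways):

```shell
#!/bin/bash
# Thin vs thick: same apparent size, very different allocation.
cd "$(mktemp -d)"

truncate -s 100M thin.img     # sparse: blocks allocated lazily on write
fallocate -l 100M thick.img   # preallocated: blocks reserved right now

# %s = apparent size in bytes, %b = 512-byte blocks actually allocated
stat -c '%n apparent=%s blocks=%b' thin.img thick.img
```

Both files report the same apparent size, but only the fallocate'd one actually eats disk space right away.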

May 23, 2018

How to specify a thin provisioned LUN in SCST

The other day I wrote about how to install SCST 3.4.0 on Ubuntu 18.04. If you are not familiar with SCST, it is basically SAN target software that you can run on Linux so you can build your own low-cost SAN storage. I've been using it for years, and just recently I've started to learn a few new things about it.

For instance, I used to think that for thin provisioning, all you had to do was to create a thin provisioned disk file to present as a LUN. To do that you just run the following:
sudo dd if=/dev/zero of=lun1 bs=1 count=0 seek=1T
The above creates a thinly provisioned 1TB LUN file called lun1. Simple right?
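
If you want to prove to yourself that the file really is thin, compare du's apparent size against actual usage. A quick sketch, shrunk to 10M so it runs fast (the file name is a throwaway):

```shell
#!/bin/bash
# Verify the dd seek trick creates a sparse (thin) file.
cd "$(mktemp -d)"
dd if=/dev/zero of=lun-test bs=1 count=0 seek=10M 2>/dev/null

du -h --apparent-size lun-test   # apparent size: 10M
du -h lun-test                   # actual usage: ~0, nothing written yet
```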

Well, this is great and all, but if you want to use features like TRIM or UNMAP to reclaim disk space, you also need to tell SCST that this LUN file is a thin provisioned LUN. To do that, you need to add the thin_provisioned parameter to the device section of your /etc/scst.conf file. See below for an example:

 HANDLER vdisk_fileio {  
     DEVICE lun1 {  
         filename /data/lun1  
         nv_cache 1  
         thin_provisioned 1  
     }  
 }  
 TARGET_DRIVER iscsi {  
     enabled 1  
     TARGET iqn.2018-05.bauer-power.net:iscsi.lun1 {  
         enabled 1  
         rel_tgt_id 1  
         GROUP VMWARE {  
             LUN 0 lun1  
             INITIATOR iqn.2018-05.com.vmware1:8bfdfcd0  
         }  
     }         
 }  

After making this change, you can either restart the scst daemon or reboot your SAN. If you can't take the SAN down, you will have to remove and re-add the LUN on the fly to make this change. To do that, run the following:
  • sudo scstadmin -rem_lun 0 -driver iscsi -target iqn.2018-05.bauer-power.net:iscsi.lun1 -group VMWARE
  • sudo scstadmin -close_dev lun1 -handler vdisk_fileio
  • sudo scstadmin -open_dev lun1 -handler vdisk_fileio -attributes filename=/data/lun1,thin_provisioned=1 
  • sudo scstadmin -add_lun 0 -driver iscsi -target iqn.2018-05.bauer-power.net:iscsi.lun1 -group VMWARE -device lun1
Obviously, you need to change the lun names, file names and target names to match your environment. Special thanks to Gilbert Standen from the Scst-devel mailing list for the above steps on making this change on the fly! Check out his blog here: (brandydandyoracle)

There are a lot of parameters you can add to your config file as well. Here's a list from SCST's Source Forge page:

  - filename - contains path and file name of the backend file.  
  - blocksize - contains block size used by this virtual device.  
  - write_through - contains status of write back caching of this virtual  
   device.  
  - read_only - contains read only status of this virtual device.  
  - o_direct - contains O_DIRECT status of this virtual device.  
  - nv_cache - contains NV_CACHE status of this virtual device.  
  - thin_provisioned - contains thin provisioning status of this virtual  
   device.  
  - removable - contains removable status of this virtual device.  
  - rotational - contains rotational status of this virtual device.  
  - size_mb - contains size of this virtual device in MB.  
  - t10_dev_id - contains and allows to set T10 vendor specific  
   identifier for Device Identification VPD page (0x83) of INQUIRY data.  
   By default VDISK handler always generates t10_dev_id for every new  
   created device at creation time based on the device name and  
   scst_vdisk_ID scst_vdisk.ko module parameter (see below).  
  - usn - contains the virtual device's serial number of INQUIRY data. It  
   is created at the device creation time based on the device name and  
   scst_vdisk_ID scst_vdisk.ko module parameter (see below).  
  - type - contains SCSI type of this virtual device.  
  - resync_size - write only attribute, which makes vdisk_fileio to  
   rescan size of the backend file. It is useful if you changed it, for  
   instance, if you resized it.  
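
To put a few of those in context, here's a hypothetical device entry (device name and path are made up) that presents a read-only, 4K-block, SSD-style device; the comments are just annotations, so strip them from a real scst.conf:

```
HANDLER vdisk_fileio {
    DEVICE lun2 {
        # backend file for this device
        filename /data/lun2
        # 4K blocks instead of the default 512
        blocksize 4096
        # present the device as read-only
        read_only 1
        # report it as non-rotational (SSD-like)
        rotational 0
    }
}
```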

Pretty cool right? Let us know what you think in the comments!

May 18, 2018

How to install SCST 3.4.0 in Ubuntu 18.04

Well crap. The other day I talked about how I re-configured one of my Bauer-Power iSCSI SANs using tgt. It was an easy setup, but once I started using it I noticed that tgt performed like shit. CPU's were spiking like crazy on the SAN itself, and when I was backing stuff up I couldn't access the drive on the backup server. It would get completely unresponsive!

I decided I had to go back to SCST. Luckily installing it is way easier than it used to be. To install version 3.4.0 now just do the following:
  • Create an empty working directory
 rm -rf ~/scst-build  
 mkdir ~/scst-build  
 cd ~/scst-build  
  • Install dependencies
 sudo apt install git devscripts equivs dkms 
 git clone -b ubuntu-3.4.x https://github.com/ubuntu-pkg/scst.git  
 cd scst  
 sudo mk-build-deps -i -r  
  • Build the package
 dpkg-buildpackage -b -uc  
  • Pre-install, create two directories (For some reason the deb packages don't do it...)
 sudo mkdir -p /var/lib/scst/pr  
 sudo mkdir -p /var/lib/scst/vdev_mode_pages  
  • Install
 sudo dpkg -i ../scst-dkms_*deb  
 sudo dpkg -i ../iscsi-scst_*.deb  
 sudo dpkg -i ../scstadmin_*deb  

Now you just have to configure your LUN using the instructions in my tgt post, and configure your /etc/scst.conf file using my old SCST post. Once those are done restart the scst service.

 sudo service scst restart  

Of course, if you don't want to mess with all of the above stuff, you could just download my pre-packaged scst 3.4.0 deb files for Ubuntu 18.04 and run my install script...

 cd ~  
 wget https://mail.bauer-power.net/drop/scst/scst-3.4.0-Ubuntu.tgz  
 tar -xzvf scst-3.4.0-Ubuntu.tgz  
 cd scst*  
 sudo chmod +x install.sh  
 sudo ./install.sh  

Now just setup your LUN files, create your /etc/scst.conf file and run the following commands:

  • modprobe scst 
  • modprobe scst_vdisk 
  • modprobe iscsi-scst 
  • iscsi-scstd 
  • scstadmin -set_drv_attr iscsi -attributes enabled=1 
  • scstadmin -config /etc/scst.conf
  • update-rc.d scst defaults
  • /etc/init.d/scst restart

Boom! Now you are off to the races!

May 17, 2018

How to Re-IP An OSSEC Agent

At my day job we use OSSEC for host based intrusion detection. It works great! It does all sorts of things from verifying registry integrity, checking files for changes, reading security logs etc., and sends email alerts for anything out of the ordinary.

Well, we're in the process of migrating servers from on-premise to Azure, so that means that some of our servers are getting new IP addresses. Googling around, I didn't find a good way to re-IP the agents except to remove them, and re-add them. I didn't want to do that.

It turns out, there is an easier way. All you have to do is edit /var/ossec/etc/client.keys with your favorite text editor and modify the IP address of the client you want to change. If you don't want to deal with this in the future, you can replace the IP address with 'any' so that OSSEC will accept connections from that client as long as the hostname and the client key match.
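
For context, each line of client.keys is just <id> <name> <ip> <key>, and the change is only to the third field. A hypothetical before and after (the ID, hostname, and key here are made up; the real file contains only the key lines, no comments):

```
# /var/ossec/etc/client.keys -- before:
001 webserver01 10.0.1.25 f0e1d2c3b4a5968778695a4b3c2d1e0f

# after (accept this agent from any source IP):
001 webserver01 any f0e1d2c3b4a5968778695a4b3c2d1e0f
```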

After you make your change, restart the OSSEC daemon on your OSSEC server:
sudo service ossec restart
Re-run /var/ossec/bin/manage_agents and extract the key again for the agent you want to update. Then on the client, open OSSEC Agent Manager as an administrator, click Manage > Stop OSSEC, re-paste the key, click Save, then restart OSSEC by clicking Manage > Start OSSEC.

Boom! Done! You should now be able to connect using the new IP address or 'any'.

May 16, 2018

Bauer-Power SAN 3.0

NOTE: Please read my post about installing SCST on Ubuntu 18.04 first...

Many moons ago I wrote about how to configure an Ubuntu Linux based iSCSI SAN. The first iteration used iSCSITarget as the iSCSI solution. The problem with that is that it didn't support SCSI-3 Persistent Reservations. That means it wouldn't work for Windows failover clustering, and you would probably see issues if you were trying to use it in VMWare, XenServer or Hyper-V.

The second iteration used SCST as the iSCSI solution, and that did work pretty well, but you had to compile it from source and the config file was kind of a pain in the ass. Still, it did support SCSI-3 Persistent Reservations and was VMWare ready. It's the solution I've been using since 2012, and it's worked out pretty well.

Well the other day I decided to rebuild one of the original units I setup from scratch. The first two units I did this setup on were SuperMicro SC826TQ's with 4 NICs, 2 quad core CPUs and 4GB of RAM, 3Ware 9750-4i RAID Controller, and twelve 2TB SATA Drives. This sucker gave me about 18TB of usable backup storage after I configured the 12 disks in RAID 6.

This time I used Ubuntu 18.04 server because unlike the first time I did this, the latest versions of Ubuntu have native drivers for 3Ware controllers. On top of that, the latest versions of Ubuntu have the iSCSI software I wanted to use in the repositories... More on that later.

I partitioned my disk as follows:

Device      Mount Point   Format      Size
/dev/sda1   N/A           bios/boot   1MB
/dev/sda2   /             ext4        10GB
/dev/sda3   N/A           swap        4GB
/dev/sda4   /data         xfs         18TB

After Ubuntu was installed I needed to setup my network team. Ubuntu 18.04 uses Netplan for network configuration now, which means that NIC bonding or teaming is built in. In order to setup bonding or teaming you just need to modify your /etc/netplan/50-cloud-init.yaml file. Here is an example of how I setup my file to team the four NICs I had, as well as use MTU 9000 for jumbo frames:


network:
    version: 2
    ethernets:
        enp6s0:
            dhcp4: no
            dhcp6: no
            mtu: 9000
        enp7s0:
            dhcp4: no
            dhcp6: no
            mtu: 9000
        enp1s0f0:
            dhcp4: no
            dhcp6: no
            mtu: 9000
        enp1s0f1:
            dhcp4: no
            dhcp6: no
            mtu: 9000
    bonds:
        bond0:
            interfaces: [enp6s0, enp7s0, enp1s0f0, enp1s0f1]
            mtu: 9000
            addresses: [100.100.10.15/24]
            gateway4: 100.100.10.1
            parameters:
                mode: balance-rr
            nameservers:
                addresses: [8.8.8.8, 8.8.4.4]


It's important to note that Netplan is picky about indentation. You must have everything properly indented or you will get errors. If you copy the above config and modify it for your server, you should be fine though. Once the file is saved, apply it with sudo netplan apply (or sudo netplan try if you want a chance to roll back).

After setting up my bonded network, I installed my software. I opted to use tgt this time. If you are unfamiliar with it, it's apparently a re-write of iscsitarget, but it supports SCSI-3 Persistent Reservations. I tested it myself using a Windows Failover Cluster Validation test:



Boom! We're in business!

To install tgt simply run the following:
sudo apt-get install tgt
After installing, you will want to create a LUN file in /data. To create a thin provisioned disk run the following:
sudo dd if=/dev/zero of=/data/lun1 bs=1 count=0 seek=1T
This creates a 1TB thinly provisioned file in /data called lun1 that you can present to iSCSI initiators as a disk. If you want to create a thick provisioned disk simply run:
sudo dd if=/dev/zero of=/data/lun1 bs=1M count=1048576
Once you have your LUN file, you will want to create a config file for your LUN. You can create separate config files for each LUN you want to make in /etc/tgt/conf.d. Just append .conf at the end of the file name and tgt will see it when the service restarts. For our purposes, I created one called lun1.conf and added the following:

<target iqn.2018-05.bauer-power.net:iscsi.lun1>
        backing-store /data/lun1
        write-cache off
        vendor_id www.Bauer-Power.net
        initiator-address 100.100.10.148
</target>


The above creates an iSCSI target and restricts access to 100.100.10.148 only. You can also use initiator-name to restrict access to particular iSCSI initiators, or incominguser to require CHAP authentication. You can also use a combination of all three if you want. Restricting by IP works for me though.
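
For example, a hypothetical stanza that restricts by initiator IQN and requires CHAP on top (the IQN, username, and password here are made up) would look like:

```
<target iqn.2018-05.bauer-power.net:iscsi.lun2>
        backing-store /data/lun2
        write-cache off
        initiator-name iqn.1991-05.com.microsoft:backupserver01
        incominguser backupuser Sup3rS3cret
</target>
```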

I also opted to disable write-cache because, with it enabled, I noticed that tgt was pegging my RAM. On top of that, my RAID controller handles write caching on its own, so disabling it actually helped my performance.

All of this being said, you can find lots of configuration options here: (tgt Config Options)

After you have your file created, all you have to do is restart the tgt daemon and you're ready to serve up your iSCSI LUN!
sudo service tgt restart
After you restart, you can see your active LUNs by running:
sudo tgtadm --op show --mode target
You can also create LUNs on the fly without restarting tgt. This is handy if you need to add a LUN and you don't want to mess up connections to LUNs you've already created. To do that, create your LUN file like you did before. Obviously, name it something new like lun2.

Next, make sure to note what LUNs you already have running by running this command:
sudo tgtadm --op show --mode target
Target 1 = tid 1, Target 2 = tid 2 and so on and so forth. If you only have one target, then your next target will be tid 2. Assuming that, and assuming your new LUN file is called lun2 you would run:

sudo tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2018-05.bauer-power.net:iscsi.lun2
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /data/lun2
sudo tgtadm --lld iscsi --op bind --mode target --tid 2 --initiator-address 100.100.10.148

This creates a target that will be available only to 100.100.10.148. If you want to allow other IPs, re-run that last line for each IP address you want to allow.

Now if you want to have this LUN persist after a reboot, you can either manually create a conf file in /etc/tgt/conf.d/ or you can run the following to automatically create one for you:
tgt-admin --dump | sudo tee /etc/tgt/conf.d/lun2.conf
The only issue with the above is that it dumps all running target information into your new file. You will have to go in there and remove the other targets. In this case, it's just better to manually create the config file... but that's just me. Also, that is not a typo... tgt-admin is a different tool than tgtadm... Weird, right?

Anyway, this setup is way easier than SCST ever was. I'm looking forward to replacing all of my SCST SANs with tgt in the upcoming months.

It's important to note that using the above hardware is not going to give you high performance. It's suitable for backup storage, and that's about it. If you want to run VMs or databases, I'd recommend getting 10GBe switches for use in iSCSI. You can get one fairly cheap here (10GBe switches). If you get 10GB switches, you will need a 10GB NIC as well. You can get one here (10GB NICs). Finally you will need faster disks. You can get 15K RPM SAS disks here (15K RPM SAS).

What do you think about this setup? Are you going to try it out? Let us know in the comments!




