Bad Tech Habits to Quit in 2018

With 2018 promising to provide leaps and bounds in the technological field, many current practices should get the boot. Here is a list of the top habits that the computer-savvy individual should attempt to quit in this new year.

Social Networking

Although often touted as the easiest and most efficient way to communicate, sites like Facebook and Twitter are addicting their users to online socializing. The downside is a decline in the social skills our society used to take for granted. These sites also prove to be a giant time sink. In fact, when I terminated my Facebook account, the spare time I gained let me get other productive things done. The time modern society wastes online could be put to better use, and we can all take the first step toward ending this waste by terminating our social networking accounts.

Online Gaming

In the same vein as social networking, online gaming is often responsible for consuming far more of one's time than intended. Thousands of hours are wasted when online gaming goes beyond entertainment and enters the realm of obsession. Online games also tend to serve as a substitute for a normal social life, which can make it hard for individuals to meet other people without a virtual sword in their hand.

Just like smoking and fatty foods, many technological addictions should be kicked out of our system, and there is no better time than the present. We have entered a new year, an opportune time to refine our lives and create a new set of online behaviors that will redefine what it means to spend time online. We should become more productive with technology and not allow ourselves to become slaves to the virtual world.

Social Media Trends to Watch Out for in 2018

You know how people say it is terrible to follow fashion trends? That may be true for fashion, but trend watching certainly works for online and social media marketing. Online algorithms change all the time, and what works now may not work next month. It is therefore crucial to keep up with trends so you don't get left behind.

The year 2017 taught us a lot of lessons about social media marketing, one of which is the importance of visuals. Text alone used to be sufficient to advertise your brand. These days, though, compelling videos and photos are essential to make people take notice of your company. The new generation relies on visuals more than text, and there is also a growing trend of previous generations preferring to watch rather than read.

Another lesson we've learned in 2017 is the importance of social media influencers. This year, marketers said a semi-goodbye to traditional advertising, where company-driven messages are broadcast for consumers to see. Influencers were observed to play a more crucial role, since they are real people whom social media users get to interact with on a daily basis. What they talk about comes from their own experience, and whether that experience is authentic is left to the judgment of the viewer.

What social media trends should we watch out for in 2018? Here is what you should prepare for:

1. The even greater importance of social media influencers

We mentioned that social media influencers became popular in 2017, and this trend is expected to grow in 2018. Since companies have only begun exploring this group of marketers, there is plenty of room to grow in the coming year. Other platforms will also be tested, including Instagram and Snapchat stories. So, if you are looking to leverage your brand, consider partnering with a social media influencer who fits your brand's principles.

Not surprisingly, even big companies such as Rolex and Philip Stein are now using social media influencers to market their brands further.

2. The rise of live streaming for marketing

This year, we've seen high-quality videos and photos published for Manhattan & New York SEO marketing purposes. In 2018, we expect to see the rise of live streaming to further market the brand. The reason is that most people now want a look behind the curtain, at what goes on behind the scenes. It is an excellent way to engage with your audience, since they get a glimpse of how you operate as a company. It will be staged, of course, but the key is to keep it as close to reality as possible.

Trend watching is crucial if you want to market your brand online. You can follow other companies and try to work out their strategies so you can see their strong and weak points. It is good to expose yourself to as many examples as possible so you can better understand the direction you are taking. And remember, what worked in 2017 may not work in 2018, so plan your social media marketing strategies well ahead of time.

Passive Measurement and Analysis (PMA) Project Data Tools

The current PMA measurement design, which utilizes a collection of independent monitors, includes more than 21 passive monitors. These monitors take regular measurements on a variety of networks, ranging from FDDI to OC12 and from ATM to POS. Traces are collected, post-processed, encoded (anonymized), and then made publicly available on the NLANR PMA website for use by other network researchers, as well as students. The passive header trace data provides the means to study workload profiles for a number of measurement points in high speed environments, soon up to OC48, with work under way for OC192 packet tracing capability.

We have developed a number of tools to handle this data, all of which are available on the website. The tools can be classified as analysis or selection tools.

Analysis tools are usually programs which take our traces, or associated data, and either generate the data in a different format, or produce a human readable presentation of statistical data gathered from the traces.
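As an illustration, the core of such an analysis tool reduces fixed-size header records to summary statistics. The record layout below is a simplified, hypothetical stand-in for the actual trace format (which carries full IP/TCP headers), keeping only the fields a per-protocol tally needs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified header record: real trace records carry a
 * timestamp plus the IP and TCP headers; only the fields needed for
 * a protocol tally are sketched here. */
typedef struct {
    uint32_t ts_sec;   /* timestamp, seconds */
    uint32_t ts_usec;  /* timestamp, microseconds */
    uint8_t  ip_proto; /* IP protocol number (6 = TCP, 17 = UDP) */
    uint16_t ip_len;   /* IP total length, bytes */
} hdr_rec;

/* Tally packets and bytes per IP protocol: the kind of summary an
 * analysis tool would then print in human-readable form. */
static void tally(const hdr_rec *recs, size_t n,
                  unsigned long pkts[256], unsigned long bytes[256])
{
    for (size_t i = 0; i < n; i++) {
        pkts[recs[i].ip_proto]++;
        bytes[recs[i].ip_proto] += recs[i].ip_len;
    }
}
```

A real tool would read the records from a trace file and print only the non-zero rows of the tally.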

Selection tools are predominantly Web based and are intended to assist researchers in finding suitable traces for their analysis needs.

To improve performance, all of the analysis tools are being rewritten in C (they were previously available only as Perl scripts). Three scripts had been rewritten successfully as of March 2002, with more under way. The initial performance increases have been quite promising. On equivalent hardware, the tsh2ta program now executes 10 to 30 times faster than the equivalent Perl script: 30 times faster on large traces, 15 to 20 times faster on small traces, with most runs seeing roughly a 10-fold increase. The reimplementation of ta2sum has yielded a three- to four-fold increase in performance; while modest compared to the gain for tsh2ta, this is still significant.

Performance Comparison: tsh2ta Perl Script vs. C Implementation

Motivation: The need for improvement

Every three hours a trace is taken at each of 21+ monitors. The current methodology gives us a ninety-second trace, which is then followed by three hours during which our post-processing, analysis, and retrieval may be done.

The ratio of three hours of processing time to ninety seconds of trace time may seem like plenty, but as link usage continues to grow and the links themselves get faster, we begin to run into time constraints that may lead to corrupted analyses, corrupted traces, or the outright loss of a trace.
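The headroom in the current schedule is easy to quantify with a back-of-the-envelope check using the figures above:

```c
#include <assert.h>

/* Back-of-the-envelope: the current schedule allows 120 seconds of
 * processing for every second of trace captured. */
enum {
    WINDOW_S = 3 * 60 * 60,     /* three hours between traces    */
    TRACE_S  = 90,              /* ninety-second trace per cycle */
    BUDGET   = WINDOW_S / TRACE_S
};
```

That 120:1 budget shrinks as faster links pack more packets into the same ninety seconds.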

On top of our current problems, we are constantly receiving requests for longer traces, on the order of hours instead of minutes. We have also toyed with the possibility of real-time analyses or continuous traces. Our current tools and methods are nowhere close to being able to run in real time for any substantial amount of traffic.

Strategy: Reimplementation

All analyses were previously done in Perl. While Perl is a fine language for prototyping and cross-platform compatibility, its efficiency as an interpreted language is less than optimal. Since we do not wish to change our methodology in any significant way, we must improve the performance of our existing tools. This can be accomplished either by improving the runtime of the analyses on current resources or by improving the resources themselves.

Our resources are many, they vary in performance, and they are cost-prohibitive to update. Moreover, even given improvements in the quality and speed of architectures deployed in the future, we will still face analysis bottlenecks. An improvement in the software therefore seems justified. I chose to rewrite all of the tools used for analyses on the passive monitors, starting with the most time-consuming ones. The obvious place to start was the tsh2ta Perl script, a preliminary analysis required by all other analyses, which has been known to fail due to resource exhaustion.

Implementation problems

One of the luxuries afforded by Perl is its built-in hash structure. Reimplementing a program that relies on the hash's efficiency is very difficult in C, especially since adequate, free, standard, and efficient abstract hash tables do not seem to exist for C. I tried out several alternatives, including glib, publib, the possibility of writing my own, and a chained hash implementation by Jeff Brown.

After wading through the API for glib, I discounted it as too complex for our purposes. Publib looked more promising, but consultations with Jeff led me away from it due to implementation issues and the possibility of performance limitations with the datasets I planned to work with. Eventually, I decided to use Jeff's chash and asked him to tailor it in a few respects so that it would suit my application.

My last problem with hash tables was iterating over them. I found that deleting records from a hash while iterating through its elements can cause bizarre and unexpected results. I managed a workaround for this problem using a global list.
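The workaround amounts to deferred deletion. The sketch below uses a minimal toy chained hash, not the chash library itself, and the fixed-size doomed-key array is a simplified stand-in for the global list mentioned above:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy chained hash table: illustrates the shape of the problem only. */
#define NBUCKETS 8

struct entry { int key, val; struct entry *next; };
static struct entry *buckets[NBUCKETS];

static unsigned slot(int key) { return (unsigned)key % NBUCKETS; }

static void put(int key, int val)
{
    struct entry *e = malloc(sizeof *e);
    e->key = key; e->val = val;
    e->next = buckets[slot(key)];
    buckets[slot(key)] = e;
}

static int get(int key)  /* value, or -1 if absent */
{
    for (struct entry *e = buckets[slot(key)]; e; e = e->next)
        if (e->key == key) return e->val;
    return -1;
}

static void del(int key)
{
    for (struct entry **p = &buckets[slot(key)]; *p; p = &(*p)->next)
        if ((*p)->key == key) {
            struct entry *dead = *p;
            *p = dead->next;
            free(dead);
            return;
        }
}

/* Remove every entry whose value is below cutoff.  Doomed keys are
 * collected in a side list during the walk and deleted only after
 * the iteration finishes, so the walk never follows a freed node. */
static size_t prune(int cutoff)
{
    int doomed[64];
    size_t n = 0;

    for (unsigned b = 0; b < NBUCKETS; b++)
        for (struct entry *e = buckets[b]; e; e = e->next)
            if (e->val < cutoff && n < 64)
                doomed[n++] = e->key;

    for (size_t i = 0; i < n; i++)
        del(doomed[i]);
    return n;
}
```

Deleting inside the inner loop instead would free the node the walk is about to dereference; deferring the deletions sidesteps that entirely.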

Initial Performance Findings

My first test was on a huge trace taken at AIX, one which had caused the Perl tsh2ta implementation to lock up. The initial results were very promising: the C implementation finished in about 10 minutes, whereas the Perl script, on the same machine under the same load, was killed prematurely after approximately 300 minutes.