Monday, 26 October 2015

Making a decision: hosting JavaScript libraries on an external CDN or locally

The benefits of hosting common JavaScript libraries externally on a JavaScript CDN are well known - in short, external hosting of JavaScript libraries
  • reduces your own server's traffic
  • makes parallel loading possible
The first question is: where to host? Directly on the library vendor's site, or somewhere else? There are mainly two factors that drive the decision of where to host our JavaScript libraries:
  • Speed
  • Popularity
The second question is more general: whether to host externally at all, or locally?
Let's look at some details that help to choose an optimal public hosting for common JavaScript libraries.

tl;dr
  • If you definitely want to host JavaScript libraries externally, host them at Google Hosted Libraries - that gives you the highest chance that your visitors already have them in cache and don't need to download them again (see the loading sketch after this list).
  • If you are in doubt, better host your JavaScript libraries locally - visitors who already have the libraries in cache are very few, and the average visitor gets the data the fastest way.
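If you do host externally, a common defensive pattern is to load the library from the CDN and fall back to a local copy if the CDN request fails. A minimal sketch, assuming jQuery 1.11.3 from Google Hosted Libraries and an illustrative local path /js/jquery-1.11.3.min.js:

    <!-- try the CDN first -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script>
      // If the CDN copy did not load, window.jQuery is undefined - write in the local copy.
      window.jQuery || document.write('<script src="/js/jquery-1.11.3.min.js"><\/script>');
    </script>

This way you keep the caching and parallel-loading benefits for most visitors, but a CDN outage does not break your site.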

Monday, 27 July 2015

After-Panda SEO for intermediary businesses

What we definitely know about Phantom and Panda Updates:
  • Phantom and Panda updates are about on-page quality, whatever that might mean ;)
  • The duplicated-content fraction of a given page is one of the most important factors that rank a page down
  • Duplicated content can be easily measured (see the sketch after this list)
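One simple way to put a number on it is shingle overlap: split the page text into short word sequences and check how many of them also occur in a reference text, for example the manufacturer's original description. The sketch below is an illustration only - the 5-word shingle length and the sample texts are my assumptions, not any known Google formula:

    // Build the set of all consecutive word sequences ("shingles") of a given length.
    function shingles(text, size) {
      var words = text.toLowerCase().replace(/[^a-z0-9\s]/g, ' ').split(/\s+/).filter(Boolean);
      var result = new Set();
      for (var i = 0; i + size <= words.length; i++) {
        result.add(words.slice(i, i + size).join(' '));
      }
      return result;
    }

    // Fraction of the page's shingles that also appear in the reference text.
    function duplicatedFraction(pageText, referenceText, size) {
      var page = shingles(pageText, size || 5);
      var reference = shingles(referenceText, size || 5);
      var shared = 0;
      page.forEach(function (s) { if (reference.has(s)) shared++; });
      return page.size ? shared / page.size : 0;
    }

    // Illustrative usage: an intermediary page that copies the manufacturer's
    // description almost word for word scores close to 1.
    var manufacturerCopy = 'The XY-100 widget is a compact device with a brushed steel case and a two year warranty.';
    var intermediaryCopy = 'Buy here: the XY-100 widget is a compact device with a brushed steel case and a two year warranty.';
    console.log(duplicatedFraction(intermediaryCopy, manufacturerCopy)); // close to 1

The closer this fraction is to 1, the more of the page counts as duplicated content in the sense above.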

SERP disintermediation - Google battles intermediaries

There is a Federal Trade Commission report about how Google misuses its dominance to push intermediary players out of some search verticals, like comparison sites or aggregated shopping, where sites sell products from different manufacturers.

In Google's view, intermediary businesses are poachers who steal Google's money. In Google's view, the SERP is the place for users to see original manufacturers or direct service providers. The SERP should not be the place for intermediary services, because they are secondary. And the sign of being secondary is easy to measure: it is the presence and the proportion of duplicated content.

The intermediary job of comparing and aggregating products and user opinions would belong only to Google, because only Google is good, honest and, last but not least, doesn't offer duplicated content - it's just a search engine, isn't it?
Google is a strong rival playing by its own rules. But do you still want to survive this battle?

Wednesday, 11 February 2015

SEOTools for Excel: solutions for losing the installation folder and a disappeared ribbon

My installation of SEOTools for Excel on Windows 7 x64 / Excel x32 didn't want to cooperate with me from the first step. First, Excel refused to open seotools.xll properly - it always thought it was a text file. Then, after trying to install the SEOTools x64 version as an add-in, it wasn't visible in the ribbon at all, but didn't want to be uninstalled either. I was forced to delete it the hard way. Then, on trying to install the SEOTools x32 version, I was pretty near success: I got the start splash screen from SEOTools, but then an error alert was raised: "The Ribbon/COM Add-in helper required by add-in SeoTools could not be registered. This is an unexpected error. Error message: Exception has been thrown by the target of an invocation." And nothing more.

After some investigation it became clear that the problem is the mismatch between the versions of the machine (x64), Win7 (x64) and Excel 15 (x32). BTW, if you need to find out what is installed on your machine - here are all the places listed where you get the needed information about your hardware, OS and Excel.

Tuesday, 13 January 2015

SEO query string universal solution

URLs with query strings can be real poison for SEO. The main and most harmful damage done by untreated URLs with query strings is an incalculable rise in the number of URLs with the same content, HTTP answer code 200 and no indexing management - also called duplicated content. For example, /shoes, /shoes?sort=price and /shoes?sessionid=abc can all return the same content with status 200. Another issue caused by query strings in URLs is overspending of crawl budget on URLs with query strings, which had better be excluded from crawling and indexing.

This way a site with untreated URLs with query strings gets, on the one hand, URLs into the index which don't belong there; on the other hand, the crawl budget for good URLs may run short, because it is overspent.

There are some passive techniques to deal with query strings in URLs. Actually, I planned to publish the existing techniques for dealing with query strings in URLs and my solution for SEO problems caused by query strings in URLs in my ultimate htaccess SEO tutorial, but then this topic gathered some more details, so I decided to create a separate article about query strings in URLs and SEO.

Previous SEO techniques for dealing with query strings in URLs

  • While Google claims it can deal with query strings in URLs, it recommends adjusting the bot's settings in Webmaster Tools for each existing query string.
  • URLs with query strings can be disallowed in robots.txt with rules like
    Disallow: /?*
    Disallow: /*?
    
  • If the header information of the HTML or PHP files available under URLs with query strings can be edited, it is possible to add rules for indexing management and URL canonicalisation (see the concrete example after this list), like
    <meta name="robots" content="noindex, nofollow">
    <link href="Current URL, but without query string" rel="canonical">
    
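As a concrete illustration of the last technique, assume a page reachable at https://www.example.com/shoes?sort=price (an invented URL): the head of that query-string variant would then contain

    <meta name="robots" content="noindex, nofollow">
    <link href="https://www.example.com/shoes" rel="canonical">

so the query-string-free URL is left as the only canonical, indexable version.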
These methods are mainly manual, require an unpredictable workload and solve the problems only partly. But the good news is: I have a universal solution that works for all URLs with query strings and gets rid of all SEO troubles caused by query strings in URLs.

Friday, 19 December 2014

Freebase shuts down! Your free Google-proof way into the Knowledge Graph closes on 31.03.2015

Everybody who has tried it knows how possible and easy it is to publish an article about a business on Wikipedia, especially if the business is far away from being in the Fortune 500, or Fortune 5k, or even Fortune 500k ;) But small, local businesses, and individual entrepreneurs too, have an absolutely legitimate wish and need to get their own websites into the Knowledge Graph...

The only free, public way to create a Google-proof web entity is (not for much longer) a Freebase entry. Well, smart people create at least 2 entries simultaneously: the first for the person, and the second for the business entity, with the person entity as an author of the business entity. I wrote an article about entity creation with a Freebase entry, which was widely shared and liked. But today is a day for the bad news:

Freebase will close in the near future!

From 31.03.2015 on, Freebase will be available only in read-only status: no more new entries, no more edits. The Freebase database will then be integrated into Wikidata. I am referring to yesterday's post from the Freebase Google Plus account, where the closing and integration roadmap is described in detail. In mid 2015 Freebase will be shut down as a standalone project. But what does putting Freebase out of service mean for all of us who want to appear in the Knowledge Graph, but don't have enough mana to appear in Wikipedia?