
8 Useful Python Libraries for SEO & How To Use Them



Editor’s note: As 2021 winds down, we’re celebrating with a 12 Days of Christmas Countdown of the most popular, helpful expert articles on Search Engine Journal this year.

This collection was curated by our editorial team based on each article’s performance, utility, quality, and the value created for you, our readers.

Each day until December 24th, we’ll repost one of the best columns of the year, starting at No. 12 and counting down to No. 1. Our countdown starts today with our No. 3 column, which was originally published on March 18, 2021.

Ruth Everett’s article on utilizing Python libraries for automating and accomplishing SEO tasks makes a marketer’s work so much easier. It’s very easy to read and perfect for beginners, as well as more experienced SEO professionals who want to use Python more.

Great work on this, Ruth, and we really appreciate your contributions to Search Engine Journal.

Enjoy!   


Python libraries are a fun and accessible way to get started with learning and using Python for SEO.


A Python library is a collection of useful functions and code that allows you to complete a number of tasks without needing to write the code from scratch.

There are over 100,000 libraries available in Python, covering everything from data analysis to creating video games.

In this article, you’ll find several different libraries I have used for completing SEO projects and tasks. All of them are beginner-friendly and you’ll find plenty of documentation and resources to help you get started.

Why Are Python Libraries Useful For SEO?

Each Python library contains functions and variables of all types (arrays, dictionaries, objects, etc.) which can be used to perform different tasks.

For SEO, for example, they can be used to automate certain things, predict outcomes, and provide intelligent insights.

It is possible to work with vanilla Python alone, but libraries make many tasks much easier and quicker to write and complete.

Python Libraries For SEO Tasks

There are a number of useful Python libraries for SEO tasks including data analysis, web scraping, and visualizing insights.


This is not an exhaustive list, but these are the libraries I find myself using the most for SEO purposes.

Pandas

Pandas is a Python library used for working with tabular data. It allows high-level data manipulation where the key data structure is the DataFrame.

DataFrames are similar to Excel spreadsheets; however, they are not subject to the same row and size limits, and they are also much faster and more efficient.

The best way to get started with Pandas is to take a simple CSV of data (a crawl of your website, for example) and save this within Python as a DataFrame.

Once you have this stored in Python, you can perform a number of different analysis tasks including aggregating, pivoting, and cleaning data.

For example, if I have a complete crawl of my website and want to extract only those pages that are indexable, I will use a built-in Pandas function to include only those URLs in my DataFrame.

import pandas as pd

# Load a crawl export into a DataFrame
df = pd.read_csv('/Users/rutheverett/Documents/Folder/file_name.csv')
df.head()

# Keep only the rows where the indexable column is True
indexable = df[df.indexable == True]
indexable
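The same DataFrame then supports the aggregating and pivoting mentioned above. Here is a minimal sketch using made-up crawl data; the status_code and word_count columns are hypothetical stand-ins for whatever your own crawl export contains:

import pandas as pd

# Hypothetical crawl export: status code and word count per URL
crawl = pd.DataFrame({
    'url': ['/a', '/b', '/c', '/d'],
    'status_code': [200, 200, 404, 301],
    'word_count': [950, 1200, 0, 15],
})

# Aggregate: how many URLs return each status code
print(crawl.groupby('status_code')['url'].count())

# Pivot: average word count per status code
print(crawl.pivot_table(index='status_code', values='word_count', aggfunc='mean'))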

Requests

The next library is called Requests and is used to make HTTP requests in Python.

Requests uses different request methods such as GET and POST to make a request, with the results being stored in Python.

One example of this in action is a simple GET request to a URL, which will print out the status code of a page:

import requests

response = requests.get('https://www.deepcrawl.com')
print(response)

You can then use this result to create a decision-making function, where a 200 status code means the page is available but a 404 means the page is not found.

if response.status_code == 200:
    print('Success!')
elif response.status_code == 404:
    print('Not Found.')

You can also access other parts of the response, such as the headers, which display useful information about the page like the content type or how long it took to cache the response.

headers = response.headers
print(headers)

print(response.headers['Content-Type'])

There is also the ability to simulate a specific user agent, such as Googlebot, in order to extract the response this specific bot will see when crawling the page.

headers = {'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'}
ua_response = requests.get('https://www.deepcrawl.com/', headers=headers)
print(ua_response)

Beautiful Soup

Beautiful Soup is a library used to extract data from HTML and XML files.


Fun fact: The BeautifulSoup library was actually named after the poem from Alice’s Adventures in Wonderland by Lewis Carroll.

As a library, BeautifulSoup is used to make sense of web files and is most often used for web scraping, as it can transform an HTML document into different Python objects.

For example, you can take a URL and use Beautiful Soup together with the Requests library to extract the title of the page.

from bs4 import BeautifulSoup
import requests

url = "https://www.deepcrawl.com"
req = requests.get(url)
soup = BeautifulSoup(req.text, "html.parser")
title = soup.title
print(title)


Additionally, using the find_all method, BeautifulSoup enables you to extract certain elements from a page, such as all the a href links on a page:


url="https://www.deepcrawl.com/knowledge/technical-seo-library/" 
req = requests.get(url) 
soup = BeautifulSoup(req.text, "html.parser")

for link in soup.find_all('a'): 
    print(link.get('href'))


Putting Them Together

These three libraries can also be used together, with Requests making the HTTP request to the page from which we want BeautifulSoup to extract information.

We can then transform that raw data into a Pandas DataFrame to perform further analysis.

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.deepcrawl.com/blog/'
req = requests.get(url)
soup = BeautifulSoup(req.text, "html.parser")

# Collect every anchor tag on the page
links = soup.find_all('a')

df = pd.DataFrame({'links': links})
df

Matplotlib And Seaborn

Matplotlib and Seaborn are two Python libraries used for creating visualizations.

Matplotlib allows you to create a number of different data visualizations such as bar charts, line graphs, histograms, and even heatmaps.


For example, if I wanted to take some Google Trends data to display the queries with the most popularity over a period of 30 days, I could create a bar chart in Matplotlib to visualize all of these.

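The original chart isn’t reproduced here, but a bar chart along those lines takes only a few lines of Matplotlib; the query names and interest scores below are invented for illustration:

import matplotlib.pyplot as plt

# Invented Google Trends-style data: query popularity over 30 days
queries = ['python seo', 'seo automation', 'log file analysis']
interest = [87, 64, 41]

plt.bar(queries, interest)
plt.xlabel('Query')
plt.ylabel('Interest (last 30 days)')
plt.title('Most popular queries')
plt.show()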

Seaborn, which is built upon Matplotlib, provides even more visualization patterns such as scatterplots, box plots, and violin plots in addition to line and bar graphs.

It differs slightly from Matplotlib in that it uses less syntax and has built-in default themes.


One way I’ve used Seaborn is to create line graphs in order to visualize log file hits to certain segments of a website over time.


import seaborn as sns
import matplotlib.pyplot as plt

sns.lineplot(x="month", y="log_requests_total", hue="category", data=pivot_status)
plt.show()

This particular example takes data from a pivot table, which I was able to create in Python using the Pandas library, and is another way these libraries work together to create an easy-to-understand picture from the data.
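The underlying log file data isn’t shown, but a pivot table in the long-form shape that the lineplot above expects could be built roughly like this; the column names and values are assumptions:

import pandas as pd

# Hypothetical log file hits: one row per month/site-segment pair
logs = pd.DataFrame({
    'month': ['Jan', 'Jan', 'Feb', 'Feb'],
    'category': ['blog', 'product', 'blog', 'product'],
    'requests': [120, 340, 150, 310],
})

# Sum requests per month and category, then flatten back into columns
pivot_status = (
    logs.pivot_table(index=['month', 'category'], values='requests', aggfunc='sum')
        .reset_index()
        .rename(columns={'requests': 'log_requests_total'})
)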

Advertools

Advertools is a library created by Elias Dabbas that can be used to help manage, understand, and make decisions based on the data we have as SEO professionals and digital marketers.


Sitemap Analysis

This library allows you to perform a number of different tasks such as downloading, parsing, and analyzing XML Sitemaps to extract patterns or analyze how often content is added or changed.
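As a rough sketch of what that looks like, assuming advertools’ sitemap_to_df function (the sitemap URL is a placeholder, and the lastmod column is only present if the sitemap provides it):

import advertools as adv

# Download and parse an XML sitemap into a DataFrame, one row per URL
sitemap = adv.sitemap_to_df('https://www.example.com/sitemap.xml')

# If lastmod dates are provided, they show how often content is changed
print(sitemap['lastmod'].describe())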

Robots.txt Analysis

Another interesting thing you can do with this library is to use a function to extract a website’s robots.txt into a DataFrame, in order to easily understand and analyze the rules set.

You can also run a test within the library in order to check whether a particular user-agent is able to fetch certain URLs or folder paths.
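Here is a minimal sketch of both ideas, assuming advertools’ robotstxt_to_df and robotstxt_test functions; the URL and paths are placeholders:

import advertools as adv

# Parse robots.txt rules into a DataFrame for easy analysis
robots = adv.robotstxt_to_df('https://www.example.com/robots.txt')
print(robots)

# Test whether a user agent is allowed to fetch particular paths
test = adv.robotstxt_test(
    'https://www.example.com/robots.txt',
    user_agents=['Googlebot'],
    urls=['/', '/private/'],
)
print(test)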

URL Analysis

Advertools also enables you to parse and analyze URLs in order to extract information and better understand analytics, SERP, and crawl data for certain sets of URLs.

You can also split URLs using the library to determine things such as the HTTP scheme being used, the main path, additional parameters, and query strings.
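A short sketch of URL splitting, assuming advertools’ url_to_df function and using placeholder URLs:

import advertools as adv

urls = [
    'https://www.example.com/blog/python-seo?utm_source=newsletter',
    'http://www.example.com/products/widgets',
]

# Split each URL into scheme, domain, path, query string, and more
url_df = adv.url_to_df(urls)
print(url_df[['scheme', 'netloc', 'path', 'query']])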

Selenium

Selenium is a Python library that is generally used for automation purposes. The most common use case is testing web applications.


One popular example of Selenium automating a flow is a script that opens a browser and performs a number of different steps in a defined sequence such as filling in forms or clicking certain buttons.

Selenium works on the same principle as the Requests library that we covered earlier.

However, it will not only send the request and wait for the response but also render the webpage that is being requested.

To get started with Selenium, you will need a WebDriver in order to make the interactions with the browser.

Each browser has its own WebDriver; Chrome has ChromeDriver and Firefox has GeckoDriver, for example.

These are easy to download and set up with your Python code. Here is a useful article explaining the setup process, with an example project.
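As a minimal sketch of such a flow, assuming Selenium 4 with ChromeDriver available on your PATH (the page and the search box name 'q' are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a Chrome browser session via ChromeDriver
driver = webdriver.Chrome()
driver.get('https://www.example.com')

# The page is fully rendered, so we can read the title...
print(driver.title)

# ...and interact with elements, e.g. fill in and submit a search form
search_box = driver.find_element(By.NAME, 'q')
search_box.send_keys('python seo')
search_box.submit()

driver.quit()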

Scrapy

The final library I wanted to cover in this article is Scrapy.

While we can use the Requests module to crawl and extract internal data from a webpage, in order to parse that data and extract useful insights we also need to combine it with BeautifulSoup.


Scrapy essentially allows you to do both of these in one library.

Scrapy is also considerably faster and more powerful: it completes crawl requests, extracts and parses data in a set sequence, and lets you store the extracted data.

Within Scrapy, you can define a number of instructions such as the name of the domain you would like to crawl, the start URL, and certain page folders the spider is allowed or not allowed to crawl.

Scrapy can be used to extract all of the links on a certain page and store them in an output file, for example.

from scrapy.spiders import CrawlSpider


class SuperSpider(CrawlSpider):
    name = "extractor"
    allowed_domains = ['www.deepcrawl.com']
    start_urls = ['https://www.deepcrawl.com/knowledge/technical-seo-library/']
    base_url = "https://www.deepcrawl.com"

    def parse(self, response):
        # Yield one item per link found, prefixed with the base URL
        for link in response.xpath('//div/p/a'):
            yield {
                "link": self.base_url + link.xpath('.//@href').get()
            }

You can take this one step further and follow the links found on a webpage to extract information from all the pages which are being linked to from the start URL, kind of like a small-scale replication of Google finding and following links on a page.

from scrapy.spiders import CrawlSpider


class SuperSpider(CrawlSpider):
    name = "follower"
    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Web_scraping']
    base_url = "https://en.wikipedia.org"

    custom_settings = {
        'DEPTH_LIMIT': 1
    }

    def parse(self, response):
        # Follow each link found on the page and parse it in turn
        for next_page in response.xpath('.//div/p/a'):
            yield response.follow(next_page, self.parse)

        # Extract the h1 text from every page visited
        for quote in response.xpath('.//h1/text()'):
            yield {'quote': quote.extract()}

Learn more about these projects, among other example projects, here.

Final Thoughts

As Hamlet Batista always said, “the best way to learn is by doing.”


I hope that discovering some of the libraries available has inspired you to get started with learning Python, or to deepen your knowledge.

Python Contributions From The SEO Industry

Hamlet also loved sharing resources and projects from those in the Python SEO community. To honor his passion for encouraging others, I wanted to share some of the amazing things I have seen from the community.

As a wonderful tribute to Hamlet and the SEO Python community he helped to cultivate, Charly Wargnier has created SEO Pythonistas to collect contributions of the amazing Python projects those in the SEO community have created.

Hamlet’s priceless contributions to the SEO Community are featured.

Moshe Ma-yafit created a super cool script for log file analysis, and in this post explains how the script works. The visualizations it can display include Google Bot Hits By Device, Daily Hits By Response Code, Response Code % Total, and more.

Koray Tuğberk GÜBÜR is currently working on a Sitemap Health Checker. He also hosted a RankSense webinar with Elias Dabbas where he shared a script that records SERPs and analyzes algorithms.


It essentially records SERPs at regular time intervals, and you can crawl all the landing pages, blend data, and create some correlations.

John McAlpin wrote an article detailing how you can use Python and Data Studio to spy on your competitors.

JC Chouinard wrote a complete guide to using the Reddit API. With this, you can perform things such as extracting data from Reddit and posting to a Subreddit.

Rob May is working on a new GSC analysis tool and building a few new real domain sites in Wix to measure against their higher-end WordPress competitors, documenting the process as he goes.

Masaki Okazawa also shared a script that analyzes Google Search Console Data with Python.


Featured image: jakkaje879/Shutterstock







SEO in Real Life: Harnessing Visual Search for Optimization Opportunities



The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

The most exciting thing about visual search is that it’s becoming a highly accessible way for users to interpret the real world, in real time, as they see it. Rather than being passive observers, users now rely on camera phones as a primary resource for knowledge and understanding in daily life.

Users are searching with their own, unique photos to discover content. This includes interactions with products, brand experiences, stores, and employees, and means that SEO can and should be taken into consideration for a number of real-world situations.

Though SEOs have little control over which photos people take, we can optimize our brand presentation to ensure we are easily discoverable by visual search tools. By prioritizing the presence of high-impact visual search elements and coordinating online SEO with offline branding, businesses of all sizes can see results.

What is visual search?

Sometimes referred to as search-what-you-see, in the context of SEO, visual search is the act of querying a search engine with a photo rather than with text. To surface results, search engines and digital platforms use AI and visual recognition technology to identify elements in the image and supply the user with relevant information.

Though Google’s visual search tools are getting a lot of attention at the moment, Google isn’t the only company working on visual search. Pinterest has been at the forefront of this space for many years, and today you can see visual search in action across a range of platforms.

In the last year, Google has spoken extensively about its visual search capabilities, hinging a number of its search improvements on Google Lens and adding more functionality all the time. As a result, year-on-year usage of Google Lens has increased threefold, with an estimated 8 billion Google Lens searches taking place each month.

Though there are many lessons to be learned from the wide range of visual search tools, which each have their own data sets, for the purpose of this article we will be looking at visual search on Google Lens and Search.

Are visual search and image search SEO the same?

No, visual search optimization is not exactly the same as image search optimization. Image search optimization forms part of the visual search optimization process, but they’re not interchangeable.

Image search SEO

With image search, you should prioritize helping images surface when users enter text-based queries. To do this, your images should follow image SEO best practices like:

  • Modern file formats

  • Alt text

  • Alt tags

  • Relevant file names

  • Schema markup

All of this helps Google to return an image search result for a text based query, but one of the main challenges with this approach is that it requires the user to know which term to enter.

For instance, with the query dinosaur with horns, an image search will return a few different dinosaur topic filters and lots of different images. To find the best result, I would need to filter and refine the query significantly.

Visual search SEO

With visual search, the image is the query, meaning that I can take a photo of a toy dinosaur with horns, search with Google Lens, then Google refines the query based on what it can see from the image.

When you compare the two search results, the SERP for the visual search is a better match for the initial image query because of the visual cues within the image. So I am only seeing results for a dinosaur that has horns, is quadrupedal, and has horns only on the face, not the frill.

From a user perspective, this is great because I didn’t have to type anything and I got a helpful result. And from Google’s perspective, this is also more efficient because they can assess the photo and decide which element to filter for first in order to get to the best SERP.

The standard image optimizations form part of what Google considers in order to surface relevant results, but if you stop there, you don’t get the full picture.

Which content elements are best interpreted in visual search

Visual search tools identify objects, text, and images, but certain elements are easier to identify than others. When users carry out a visual search, Google taps into multiple data sources to satisfy the query.

The Knowledge Graph, Vision AI, Google Maps, and other sources combine to surface search results, but in particular, Google’s tools have a few priority elements. When these elements are present in a photo, Google can sort, identify, and/or visually match similar content to return results:

  • Landmarks are identified visually but are also connected to their physical location on Google Maps, meaning that local businesses or business owners should use imagery to demonstrate their location.

  • Logos are interpreted in their entirety, rather than as single letters. So even without any text, Google can understand that the swoosh means Nike. This data comes from the logos in knowledge panels, website structured data, Google Business Profile, Google Merchant, and other sources, so they should all align.

  • Knowledge Graph Entities are used to tag and categorize images and have a significant impact on what SERP is displayed for a visual search. Google recognizes around 5 billion KGE, so it is worth considering which ones are most relevant to your brand and ensuring that they are visually represented on your site.

  • Text is extracted from images via Optical Character Recognition, which has some limitations — not all languages are recognized, nor are backwards letters. So if your users regularly search photos of printed menus or other printed text, you should consider readability of the fonts (or handwriting on specials boards) you use.

  • Faces are interpreted for sentiment, but the quantity of faces also comes into account, meaning that businesses that serve large groups of people — like event venues or cultural institutions — would do well to include images that demonstrate this.

Visual Search Element | Corresponding Online Activity | Priority Verticals
Landmarks | Website Images, Google Maps, Google Business Profile | Tourism, Restaurants, Cultural Institutions, Local Businesses
Logo | Website Images, Website Structured Data, Google Merchant, Google Business Profile, Wikipedia, Knowledge Panel | All
Knowledge Graph Entities | Website Images, Image Structured Data, Google Business Profile | Ecommerce, Events, Cultural Institutions
Text | Website Copy, Google Business Profile | All
Faces | Website Images, Google Business Profile | Events, Tourism, Cultural Institutions

How to optimize real world spaces for visual search

Just as standard SEO should be focused on meeting and anticipating customer needs, visual search SEO requires awareness of how customers interact with products and services in real-world spaces. This means SEOs should apply the same attention to UGC that one would use for keyword research. To that end, I would argue we should also think about consciously applying optimizations to the potential content of these images.

Optimize sponsorship with unobstructed placements

This might seem like a no-brainer, but in busy sponsorship spaces it can sometimes be a challenge. As an example, let’s take this photo from a visit to the Staples Center a few years ago.

Like any sports arena, this is filled to the brim with sponsorship endorsements on the court, the basket, and around the venue.

But when I run a visual search assessment for logos, the only one that can clearly be identified is the Kia logo in the jumbotron.

This isn’t because their logo is especially distinct or unique (there is another Kia logo under the basketball hoop); rather, it is because the jumbotron placement is clean in terms of composition, with lots of negative space around the logo and fewer identifiable entities in the immediate vicinity.

Within the wider arena, many of the other sponsorship placements are being read as text, including Kia’s logo below the hoop. This has some value for these brands, but since text recognition doesn’t always complete the word, the results can be inconsistent.

So what does any of this have to do with SEO?

Well, Google Image Search now includes results that rely on visual recognition, independent of text cues. For the query kia staples center, two of the top five results do not have the word kia in the copy, alt text, or alt tags of the web pages they are sourced from. So visual search is impacting rankings here, and with Google Images accounting for roughly 20% of online searches, this can have a significant impact on search visibility.

What steps should you take to SEO your sponsorships?

Whether it’s the major leagues or the local bowling league, if you are sponsoring something that is likely to be photographed extensively, you should do the following to get the most benefit from visual search:

  • Ensure that your real life sponsorship placement is in an unobscured location

  • Use the same logo in real life that is in your schema, GBP, and knowledge panel

  • Get a placement with good lighting and high contrast brand colors

  • Don’t rely on “light up” logos or flags that have inconsistent visibility on camera phones

You should also ensure that you’re aligning your real life presence with your digital activity. Include images of the sponsorship display on your website so that you can surface for relevant queries. If you dedicate a blog to the sponsorship activity that includes relevant images, image search optimizations, and copy, you increase your chances of outranking other content and bringing those clicks to your site.

Optimizing merch & uniforms for search

When creating merchandising and uniforms, visual discoverability for search should be a priority because users can search photos of promotional merch and images with team members in a number of ways and for an indefinite period of time.

Add text and/or logos

For instance, from my own camera roll, I have a few photos that can be categorized via the Google Photos machine-learning-powered image search with the query nasa. Two of these photos include the word “NASA” and the others include the logo.

Oddly enough, though, the photo of my Women of NASA LEGO set does not surface for this query. It shows for lego but not for nasa. Looking closely at the item itself, I can see that neither the NASA logo nor the text have been included in the design of the set.

Adding relevant text and/or logos to this set would have optimized this merchandise for both brands.

Stick to relevant brand colors

And since Google’s visual search AI is also able to discern brand colors, you should also prioritize merchandise that is in keeping with your brand colors. T-shirts and merch that deviate from your core color scheme will be less likely to make Visual Matches when users search via Google Lens.

In the example above, event merchandise created outside the core brand colors of red, black, and white was much less recognizable than items in the typical colors.

Focus on in-person brand experiences

Creating experiences with customers in store and at events can be a great way to build brand relationships. It’s possible to leverage these activities for search if you take an SEO-centric approach.

Reduce competition

Let’s consider this image from a promotional experience in Las Vegas for Lyft. As a user, I enjoyed this immensely, so much so that I took a photo.

Though the Viva Lyft Vegas event was created by the rideshare company, in terms of visual search, Pabst are genuinely taking the blue ribbon, as they are the main entity identified in this query. But why?

First, Pabst has claimed their knowledge panel while Lyft has not, meaning that Lyft is less recognizable as a visual entity because it is less defined as an entity.

Second, though it does not have a Google Maps entry, the Las Vegas PBR sign has had landmark-esque treatment since it was installed, with features in The Neon Museum and a UNLV Neon Survey. All of this to say that, in this context, Lyft is being upstaged.

So to create a more SEO-friendly promotional space, they could have laid the groundwork by claiming their knowledge panel and reducing visual search competitors in the viewable space to make sure all eyes were on them.

Encourage optimized user-generated content

Sticking to Las Vegas, here is a typical touristy photo of me with friends outside the Excalibur Hotel:

And when I say that it’s typical, that’s not conjecture. A quick visual search reveals many other social media posts and websites with similar images.

This is what I refer to as that picture. You know the kinds of high-occurrence UGC photos: under the castle at the entrance to Disneyland or even the pink wall at Paul Smith’s on Melrose Ave. These are the photos that everyone takes.

Can you SEO these photos for visual search? Yes, I believe you can in two ways:

  1. Encourage people to take photos in certain places that you know, or have designed to include relevant entities, text, logos, and/or landmarks in the viewline. You can do this by declaring an area a scenic viewpoint or creating a photo friendly, dare I say “Instagrammable”, area in your store or venue.

  2. Ensure frequently photographed mobile brand representations (e.g. mascots and/or vehicles) are easily recognizable via visual search. Where applicable, you should also claim their knowledge panels.

Once you’ve taken these steps, create dedicated content on your website with images that can serve as a “visual match” to this high frequency UGC. Include relevant copy and image search optimizations to demonstrate authority and make the most of this visibility.

How does this change SEO?

The notion of bringing visual search considerations to real world spaces may seem initially daunting, but this is also an opportunity for businesses of all sizes to consolidate brand identities in an effective way. Those working in SEO should coordinate efforts with PR, branding, and sponsorship teams to capture visual search traffic for brand wins.





Google Shopping Ads With Shaded Backgrounds For Some Results



Google is using a gray shaded background color for some of the results within the Google Shopping Ads carousel. I was able to replicate this: for some images, Google seems to think a light gray background works better than a white one. The shading is not tiled, with every other result shaded; it appears to be based on some other algorithm, perhaps the color of the product photo?

Here is a screenshot of this:

Saad AK shared videos of this on Twitter:

I am not sure if this is new or not but maybe it is?

Forum discussion at Twitter.







W3C Announces Major Change



The World Wide Web Consortium (W3C), the standards body in charge of web standards such as HTML and browser privacy, announced a significant change in how it will operate. Beginning in January 2023, the W3C will become a new public-interest non-profit organization.

World Wide Web Consortium (W3C)

While many turn to Google to understand how to use HTML elements like titles, meta descriptions, and headings, the W3C created the actual specifications for how to use them.

The W3C is vital for the future of the entire web because it is developing privacy standards with stakeholders from around the world and the technology sector.

Stakeholders such as Google, Brave browser, Microsoft and others are involved in developing new standards for how browsers will handle privacy.

However, there are others with a stake in tracking users across the web via third-party trackers, who also belong to the W3C, who are trying to influence what those new privacy standards will be.

A news report (“A privacy war is raging inside the W3C”) on the internal W3C struggle over the future of web privacy quoted the Director of Privacy at the anti-tracker browser Brave, who said that some members of the W3C contributing to the discussion are trying to water down privacy standards.

Pete Snyder, Director of Privacy at Brave, said:

“They use cynical terms like: ‘We’re here to protect user choice’ or ‘We’re here to protect the open web’ …They’re there to slow down privacy protections that the browsers are creating.”

What happens to the W3C and how it evolves is important because this is the future of how the web works, including what the future of the privacy standards will be.

The W3C was founded in 1994 by Tim Berners-Lee, the Web’s inventor. The mission of the W3C is to guide the creation of open protocols and guidelines that would encourage the continued growth of the Internet, including web privacy standards.

The W3C currently operates under a “hosted model,” as an international standards-making body hosted in the USA, France, China, and Japan.

The decision was made to transition this model to a non-profit that could more rapidly respond to the fast-changing pace of innovation on the web.

The statement notes that the original “hosted model” hindered rapid development.

According to the official announcement:

“We need a structure where we meet at a faster pace the demands of new web capabilities and address the urgent problems of the web.

The W3C Team is small, bounded in size, and the Hosted model hinders rapid development and acquisition of skills in new fields.

We need to put governance at the center of the new organization to achieve clearer reporting, accountability, greater diversity and strategic direction, better global coordination.

A Board of Directors will be elected with W3C Member majority.

It will include seats that reflect the multi-stakeholder goals of the Web Consortium.

We anticipate to continue joint work with today’s Hosts in a mutually beneficial partnership.”

Transition to a New Legal Entity

Although the W3C is transitioning to a new structure, the announcement sought to assure the public that current decision-making processes will remain the same.

They stated:

“The proven standards development process must and will be preserved.”

While they state that the development and decision-making processes will remain the same, the reason for transitioning to a non-profit organization is to enable the W3C to “grow” beyond the structure it has had since its founding in 1994, when it developed web standards for the early web, and to “mature” in order to better meet the needs of the future.

The reason for the change was explained in the context of evolution and growth:

“As W3C was created to address the needs of the early web, our evolution to a public-interest non-profit is not just to continue our community effort, but to mature and grow to meet the needs of the web of the future.”

“Grow” means to change, and “mature” means to reach the next stage of change. It’s awkward to reconcile the concepts of change and maturation with the idea of remaining precisely the same.

How can the W3C expect to grow and change to meet future challenges while simultaneously remaining the same?

The W3C claims that the decision-making processes will remain exactly the same:

“Our standards work will still be accomplished in the open, under the W3C Process Document and royalty-free W3C Patent Policy, with input from the broader community. Decisions will still be taken by consensus. Technical direction and Recommendations will continue to require review by W3C Members – large and small.”

While the announcement downplays the changes as just being a change to the “shell” around the W3C, it also states how it operates will evolve and grow.

Fortunately, as long as the W3C conducts all of its business in the open, and nothing is decided except by consensus by all the stakeholders, transparency should help guarantee fairness in the decisions made, regardless of how much the W3C changes (while remaining the same).


Citation

Read the Official W3C Announcement

W3C to become a public-interest non-profit organization

Image by Shutterstock/SvetaZi




