On Tuesday a one-day conference will report on the early progress of an ongoing 12-month trial aimed at developing a platform that will allow search engines to recognise the terms and conditions of specific newspaper websites.

ACAP, the Automated Content Access Protocol, is a pilot project to create an automated system through which online publishers can grant search engines, such as Google, permission to use their content.

In newspaper site circles the ACAP debate has, thus far, created more heat than light. Yet already one publisher in the scheme has questioned the commitment of search engines to the project.

Earlier this month at the World Editors Forum, in Cape Town, Daniel Neethling from South Africa's Media 24 group - an ACAP participant - told the gathering of newspaper executives that he believed search engines were 'involved but not committed' to the pilot scheme.

During the same conference Google ran two technical seminars in a bid to convince newspaper publishers to adopt its preferred standard for statements of permission and prohibition for web pages.

Representatives of Google went to great lengths to stress that there was no need to develop a new protocol, as robots.txt files could essentially fulfil the job and were already used extensively across the web.
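For readers unfamiliar with the mechanism Google is advocating: a robots.txt file sits at a site's root and tells compliant crawlers which paths they may and may not fetch. A minimal, illustrative sketch of a newspaper site's policy (the paths and rules here are hypothetical; `Allow` is a widely supported extension rather than part of the original standard):

```
# Google's crawler may index everything except the paid archive
User-agent: Googlebot
Disallow: /archive/premium/

# All other crawlers may index the news section only
User-agent: *
Allow: /news/
Disallow: /
```

The file expresses fetch permissions per crawler, which is the extent of what the standard covers; it says nothing about how fetched content may subsequently be used.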

The search giant is not a member of the ACAP project, but it is a keen observer that is likely to have a vested interest in any future standards that determine how search engines use content.

"Robots.txt is the current industry standard; it is used by millions of websites. We think it solves most of the existing problems that ACAP wants to solve; it's an existing protocol that they can use," Dan Crow, product manager of crawl systems at Google, told Journalism.co.uk.

"The needs of publishers are not so unique that they couldn't be solved with a tool that would also be applicable and useful to everyone else," Mr Crow added.

"We think that robots.txt is a good standard and it has been widely adopted. There are already so many sites using robots.txt out there; rather than invent something new, why not start with something that works most of the time and is already widely adopted?"
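Part of Mr Crow's point is that robots.txt support is already built into standard tooling. As a sketch, Python's built-in urllib.robotparser module can evaluate a robots.txt policy for a given crawler (the policy and paths below are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher policy: Googlebot may not fetch the
# premium archive, but may fetch everything else on the site.
policy = """
User-agent: Googlebot
Disallow: /archive/premium/
"""

parser = RobotFileParser()
parser.parse(policy.splitlines())

# A compliant crawler checks each URL against the policy before fetching.
print(parser.can_fetch("Googlebot", "/news/story.html"))         # True
print(parser.can_fetch("Googlebot", "/archive/premium/a.html"))  # False
```

This illustrates both sides of the debate: the mechanism is simple and universally supported, but it only answers "may you fetch this?", not the finer-grained questions of use, display and duration that ACAP's backers want to express.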

Participants in ACAP - which include AFP, Media 24 and Independent News and Media - and Google seem, initially at least, intent on solving the same problem: how to recognise and respect publishers' rights in their content while still indexing it and making it searchable.

Yet Mr Neethling told Journalism.co.uk that he did not believe robots.txt files were an acceptable standard for the project and that more work was needed to find a better system.

There is, however, also a secondary point, which Mr Crow dismissed as not even forming part of Google's discussions with ACAP: that ACAP could offer a mechanism for accurate tracking and payment.

In his speech on the opening day of the conference, Mr Neethling said the uncontrolled use of content on the internet had the potential to destroy the livelihood of content producers.

He also told Journalism.co.uk that, although secondary to ACAP's primary role as a system for recognising terms and conditions, he considered a foundation on which a payment system could be built to be desirable.

London-based firm Rightscom, commissioned to act as project co-ordinator by the organisations behind the initiative - the International Publishers Association, the World Association of Newspapers and the European Publishers Council - says it has no involvement in this area and that any commercial discussions are ones to be left to the publishers and their partners.

Perhaps Tuesday's meeting will shed light, as well as heat, and close the gap between the expectations of some content producers and the search engines.
